Chapter 6. Job templates
You can create both job templates and workflow job templates from the Templates page.
For Workflow job templates, see Workflow job templates.
A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to run the same job many times. They also encourage the reuse of Ansible Playbook content and collaboration between teams.
The Templates page shows both job templates and workflow job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click the arrow icon next to each entry to expand and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template.
From this screen you can launch, edit, copy, and delete a job template.
Workflow templates have the workflow visualizer icon as a shortcut for accessing the workflow editor.
You can use job templates to build a workflow template. Templates that show the Workflow Visualizer icon next to them are workflow templates. Clicking the icon allows you to build a workflow graphically. Many parameters in a job template enable you to select Prompt on Launch; values that you change at the workflow level do not affect the values assigned at the job template level. For instructions, see the Workflow Visualizer section.
6.1. Creating a job template
Procedure
- From the navigation panel, select Templates.
- On the Templates page, select Create job template from the Create template list.
Enter the appropriate details in the following fields:
Note: If a field has the Prompt on launch checkbox selected, launching the job prompts you for the value of that field.
Most prompted values override any values set in the job template.
Exceptions are noted in the following table.
Field | Options | Prompt on Launch

Name
Enter a name for the job.
N/A
Description
Enter an arbitrary description as appropriate (optional).
N/A
Job type
Choose a job type:
- Run: Start the playbook when launched, running Ansible tasks on the selected hosts.
- Check: Perform a "dry run" of the playbook and report changes that would be made without actually making them. Tasks that do not support check mode are skipped and do not report potential changes.
For more information about job types, see the Playbooks section of the Ansible documentation.
Yes
Inventory
Choose the inventory to use with this job template from the inventories available to the logged in user.
A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template.
Yes.
The Inventory prompt shows up as its own step in a later prompt window.
Project
Select the project to use with this job template from the projects available to the user that is logged in.
N/A
Source control branch
This field is only present if you chose a project that allows branch override. Specify the overriding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the project is used.
For more information, see Job branch overriding.
Yes
Execution Environment
Select the container image to be used to run this job. You must select a project before you can select an execution environment.
Yes.
The Execution environment prompt shows up as its own step in a later prompt window.
Playbook
Choose the playbook to be launched with this job template from the available playbooks. This field automatically populates with the names of the playbooks found in the project base path for the selected project. Alternatively, you can enter the name of a playbook that is not listed, such as foo.yml, to run with this job template. If you enter a filename that is not valid, the template displays an error, or the job fails.
N/A
Credentials
Select the icon to open a separate window.
Choose the credential from the available options to use with this job template.
Use the drop-down menu list to filter by credential type if the list is extensive. Some credential types are not listed because they do not apply to certain job templates.
- If selected, when launching a job template that has a default credential, supplying another credential replaces the default credential if it is of the same type. The following is an example of this message:
Job Template default credentials must be replaced with one of the same type. Please select a credential for the following types in order to proceed: Machine.
- You can add more credentials as you see fit.
- The Credential prompt shows up as its own step in a later prompt window.
Labels
- Optionally supply labels that describe this job template, such as dev or test.
- Use labels to group and filter job templates and completed jobs in the display.
- Labels are created when they are added to the job template. Labels are associated with a single Organization by using the Project that is provided in the job template. Members of the Organization can create labels on a job template if they have edit permissions (such as the admin role).
- Once you save the job template, the labels appear in the Job Templates overview in the Expanded view.
- Select the icon beside a label to remove it. When a label is removed, it is no longer associated with that particular Job or Job Template, but it remains associated with any other jobs that reference it.
- Jobs inherit labels from the Job Template at the time of launch. If you delete a label from a Job Template, it is also deleted from the Job.
- If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed.
- You cannot delete existing labels; selecting the remove option only removes newly added labels, not existing default labels.
Forks
The number of parallel or simultaneous processes to use while executing the playbook. A value of zero uses the Ansible default setting, which is five parallel processes unless overridden in /etc/ansible/ansible.cfg.

Yes
Limit
A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate multiple patterns with colons (:). As with core Ansible:
- a:b means "in group a or b"
- a:b:&c means "in a or b but must be in c"
- a:!b means "in a, and definitely not in b"
For more information, see Patterns: targeting hosts and groups in the Ansible documentation.
Yes
If not selected, the job template executes against all nodes in the inventory or only the nodes predefined in the Limit field. When running as part of a workflow, the workflow job template limit is used instead.
Verbosity
Control the level of output Ansible produces as the playbook executes. Choose the verbosity from Normal to various Verbose or Debug settings. This only appears in the details report view. Verbose logging includes the output of all commands. Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances.
Verbosity 5 causes automation controller to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up.

Yes
Job slicing
Specify the number of slices you want this job template to run. Each slice runs the same tasks against a part of the inventory. For more information about job slices, see Job Slicing.
Yes
Timeout
This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value:
- There is a global timeout defined in the settings which defaults to 0, indicating no timeout.
- A negative timeout (<0) on a job template is a true "no timeout" on the job.
- A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default).
- A positive timeout sets the timeout for that job template.
Yes
Show changes
Enables you to see the changes made by Ansible tasks.
Yes
Instance groups
Choose Instance and Container Groups to associate with this job template. If the list is extensive, use the icon to narrow the options. Job template instance groups contribute to the job scheduling criteria; see Job Runtime Behavior and Control where a job runs for the rules. A System Administrator must grant you or your team permissions to be able to use an instance group in a job template. Use of a container group requires admin rights.
- Yes.
If selected, you are providing the job’s preferred instance groups in order of preference. If the first group is out of capacity, later groups in the list are considered until one with capacity is available, at which point it is selected to run the job.
- If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups.
- The Instance Groups prompt shows up as its own step in a later prompt window.
Job tags
Type and select the Create menu to specify which parts of the playbook should be executed. For more information and examples see Tags in the Ansible documentation.
Yes
Skip tags
Type and select the Create menu to specify certain tasks or parts of the playbook to skip. For more information and examples see Tags in the Ansible documentation.
Yes
Extra variables
- Pass extra command line variables to the playbook. This is the "-e" or "--extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at Defining variables at runtime.
- Give key/value pairs by using either YAML or JSON. These variables have maximum precedence and override other variables specified elsewhere. The following is an example value:

git_branch: production
release_version: 1.5
Yes.
If you want to be able to specify extra_vars on a schedule, you must select Prompt on launch for Variables on the job template, or enable a survey on the job template. Those answered survey questions become extra_vars.

You can set the following options for launching this template, if necessary:
- Privilege escalation: If checked, you enable this playbook to run as an administrator. This is the equivalent of passing the --become option to the ansible-playbook command (see the sketch after this list).
- Provisioning callback: If checked, you enable a host to call back to automation controller through the REST API and start a job from this job template. For more information, see Provisioning Callbacks.
- Enable webhook: If checked, you turn on the ability to interface with a predefined SCM system web service that is used to launch a job template. GitHub and GitLab are the supported SCM systems.
- If you enable webhooks, other fields display, prompting for additional information:
- Webhook service: Select which service to listen for webhooks from.
- Webhook URL: Automatically populated with the URL for the webhook service to POST requests to.
- Webhook key: Generated shared secret to be used by the webhook service to sign payloads sent to automation controller. You must configure this in the settings on the webhook service in order for automation controller to accept webhooks from this service.
- Webhook credential: Optionally, give a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. Before you can select it, the credential must exist. See Credential types to create one.
- For additional information about setting up webhooks, see Working with Webhooks.
- Concurrent jobs: If checked, you are allowing jobs in the queue to run simultaneously if not dependent on one another. Check this box if you want to run job slices simultaneously. For more information, see Automation controller capacity determination and job impact.
- Enable fact storage: If checked, automation controller stores gathered facts for all hosts in an inventory related to the job running.
- Prevent instance group fallback: Check this option to allow only the instance groups listed in the Instance Groups field to run the job. If clear, all available instances in the execution pool are used based on the hierarchy described in Control where a job runs.
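As an illustration of what the Privilege escalation option does, the following is a minimal sketch of the same effect expressed directly in a play: enabling become here behaves like passing --become for the whole run. This is only an equivalence sketch, not how the controller implements the option; the package name is an arbitrary example.

# Enabling become at the play level has the same effect as the
# --become flag (and the Privilege escalation checkbox) for this run.
- hosts: all
  become: true
  tasks:
    - name: Install a package, which requires administrator privileges
      ansible.builtin.package:
        name: vim
        state: present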
- Click Create job template when you have completed configuring the details of the job template.
Creating the template does not exit the job template page but advances to the Job Template Details tab. After saving the template, you can click Launch template to start the job. You can also edit the template to add or change its attributes, such as permissions, notifications, completed jobs, and surveys (if the job type is not a scan). You must first save the template before launching; otherwise, the launch option remains disabled.
Verification
- From the navigation panel, select Templates.
- Verify that the newly created template appears on the Templates page.
6.2. Adding permissions to templates
Use the following steps to add permissions for the team.
Procedure
- From the navigation panel, select Templates.
- Select a template, and in the Team Access or User Access tab, click the option to add roles.
- Select Teams or Users and proceed to the next step.
- Select one or more users or teams from the list by clicking the checkboxes next to the names to add them as members, and proceed.
- Choose the roles that you want the users or teams to have. Ensure that you scroll down for a complete list of roles. Each resource has different options available.
- Apply the roles to the selected users or teams to add them as members.
The window to add users and teams closes to display the updated roles assigned for each user and team.
To remove roles for a particular user, click the icon next to its resource.
This launches a confirmation dialog, asking you to confirm the disassociation.
6.3. Deleting a job template
Before deleting a job template, ensure that it is not used in a workflow job template.
Procedure
Delete a job template by using one of these methods:
- Select the checkbox next to one or more job templates, click the menu icon, and select the delete option.
- Select the required job template and, on the Details page, click the menu icon and select the delete option.
If you delete items that are used by other work items, a message lists the items that are affected by the deletion and prompts you to confirm the deletion. Some screens contain items that are invalid or were previously deleted, and will fail to run.
6.4. Work with notifications
From the navigation panel, select Templates, then select the Notifications tab of the template that you want to configure.
Use the toggles to enable or disable the notifications to use with your particular template. For more information, see Enable and disable notifications.
If no notifications have been set up, create a new notification. For more information about configuring various notification types and extended messaging, see Notification types.
6.5. View completed jobs
The Jobs tab provides the list of job templates that have run. Click the expand icon next to each job to view the following details:
- Status
- ID and name
- Type of job
- Time started and completed
- Who started the job and which template, inventory, project, and credential were used.
You can filter the list of completed jobs using any of these criteria.
Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run.
6.6. Scheduling job templates
Access the schedules for a particular job template from the Schedules tab.
Procedure
To schedule a job template, select the Schedules tab from the job template, and select the appropriate method:
- If schedules are already set up, review, edit, enable or disable your schedule preferences.
- If schedules have not been set up, see Schedules for more information.
If you select Prompt on Launch for the Credentials field, and you create or edit scheduling information for your job template, a Prompt option displays on the Schedules form.
In the Prompt dialog, you cannot remove the default machine credential without replacing it with another machine credential before you save.
To set extra_vars on schedules, you must select Prompt on Launch for Variables on the job template, or configure and enable a survey on the job template. The answered survey questions then become extra_vars.
6.7. Surveys in job templates
Job types of Run or Check provide a way to set up surveys in the Job Template creation or editing screens. Surveys set extra variables for the playbook in the same way that Prompt for Extra Variables does, but in a user-friendly question-and-answer way. Surveys also permit validation of user input. Select the Survey tab to create a survey.
Example
You can use surveys for several situations. For example, an operations team might want to give developers a "push to stage" button that they can run without advance knowledge of Ansible. When launched, this task could prompt for answers to questions such as "What tag should we release?".
You can ask many types of questions, including multiple-choice questions.
6.7.1. Creating a survey
Procedure
- From the navigation panel, select Templates.
- Select the job template you want to create a survey for.
- From the Survey tab, click the option to create a survey question.
A survey can consist of any number of questions. For each question, enter the following information:
- Question: The question to ask the user.
- Optional: Description: A description of what is being asked of the user.
- Answer variable name: The Ansible variable name to store the user’s response in. This is the variable to be used by the playbook. Variable names cannot contain spaces.
Answer type: Choose from the following question types:
- Text: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
- Textarea: A multi-line text field. You can set the minimum and maximum length (in characters) for this answer.
- Password: Responses are treated as sensitive information, much like an actual password is treated. You can set the minimum and maximum length (in characters) for this answer.
- Multiple Choice (single select): A list of options, of which only one can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field.
- Multiple Choice (multiple select): A list of options, any number of which can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field.
- Integer: An integer number. You can set the minimum and maximum value for this answer.
- Float: A decimal number. You can set the minimum and maximum value for this answer.
- Required: Whether or not an answer to this question is required from the user.
- Minimum length and Maximum length: Specify if a certain length in the answer is required.
- Default answer: The default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user.
Once you have entered the question information, click the button to add the question.
The survey question displays in the Survey list. For any question, you can click the edit icon to edit it.
Check the box next to each question and click the delete option to delete the question, or use the toggle option in the menu bar to enable or disable the survey prompts.
If you have more than one survey question, you can rearrange the order of the questions by clicking and dragging the grid icon.
- To add more questions, click the button to add another question.
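For example, if a survey question stores its answer in a variable named release_tag (a hypothetical name used only for illustration), the playbook can read it like any other extra variable. A minimal sketch:

# The survey answer arrives as an extra variable, so the playbook can
# reference it directly.
- hosts: all
  gather_facts: false
  tasks:
    - name: Show the tag chosen in the survey
      debug:
        msg: "Releasing tag {{ release_tag }}"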
6.7.2. Optional survey questions
The Required setting on a survey question determines whether the answer is optional or not for the user interacting with it.
Optional survey variables can also be passed to the playbook in extra_vars.
- If a non-text variable (input type) is marked as optional and is not filled in, no survey extra_var is passed to the playbook.
- If a text input or text area input is marked as optional, is not filled in, and has a minimum length > 0, no survey extra_var is passed to the playbook.
- If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0, that survey extra_var is passed to the playbook, with the value set to an empty string ("").
6.8. Launching a job template
A benefit of automation controller is the push-button deployment of Ansible playbooks. You can configure a template to store all the parameters that you would normally pass to the Ansible Playbook on the command line. In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line.
Easier deployments drive consistency, by running your playbooks the same way each time, and allowing you to delegate responsibilities.
Procedure
Launch a job template by using one of these methods:
- From the navigation panel, select Templates and click Launch template next to the job template.
- In the Details tab of the job template you want to launch, click Launch template.
A job can require additional information to run. The following data can be requested at launch:
- Credentials that were set up
- Any parameter for which the Prompt on Launch option is selected
- Passwords or passphrases that have been set to Ask
- A survey, if one has been configured for the job template
- Extra variables, if requested by the job template
If a job has user-provided values, then those are respected upon relaunch. If the user did not specify a value, then the job uses the default value from the job template. Jobs are not relaunched as-is. They are relaunched with the user prompts re-applied to the job template.
If you provide values on one tab and then return to a previous tab, continuing to the next tab requires you to provide the values again on the remaining tabs. Ensure that you complete the tabs in the order that the prompts appear.
When launching, automation controller automatically redirects the web browser to the Job Status page for this job under the Jobs tab.
You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified inventory. For more information, see the Jobs in automation controller section.
When slice jobs are running, job lists display the workflow and job slices, and a link to view their details individually.
You can launch jobs in bulk by using the newly added endpoint in the API, /api/v2/bulk/job_launch. This endpoint accepts JSON and you can specify a list of unified job templates (such as job templates and project updates) to launch. The user must have the appropriate permission to launch all the jobs. If not all jobs can be launched, an error is returned indicating why the operation could not complete. Use the OPTIONS request to return the relevant schema. For more information, see the Bulk endpoint of the Reference section of the Automation Controller API Guide.
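The following is a sketch of a request body for that endpoint, shown in YAML for readability (the API accepts the equivalent JSON). The field names and template IDs here are assumptions for illustration only; use an OPTIONS request against /api/v2/bulk/job_launch for the authoritative schema.

# Hypothetical bulk launch payload: launch two unified job templates,
# identified by example IDs 7 and 11.
name: Nightly bulk launch
jobs:
  - unified_job_template: 7
  - unified_job_template: 11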
6.9. Copying a job template
If you copy a job template, it does not copy any associated schedule, notifications, or permissions. Schedules and notifications must be recreated by the user or administrator creating the copy of the job template. The user copying the Job Template is granted administrator permission, but no permissions are assigned (copied) to the job template.
Procedure
- From the navigation panel, select Templates.
- Click the copy icon associated with the template that you want to copy.
- The new template, with the name of the copied template and a timestamp, displays in the list of templates.
- Open the new template and edit it.
- Replace the contents of the Name field with a new name, and give or change the entries in the other fields to complete this page.
- Save the template.
6.10. Scan job templates
Scan jobs are no longer supported starting with automation controller 3.2. This system tracking feature was used as a way to capture and store facts as historical data. Facts are now stored in the controller through fact caching. For more information, see Fact Caching.
Job template scan jobs that existed in your system before automation controller 3.2 are converted to type run, like normal job templates. They retain their associated resources, such as inventories and credentials. By default, job template scan jobs that do not have a related project are assigned a special playbook. You can also specify a project with your own scan playbook. A project is created for each organization that points to awx-facts-playbooks and the job template is set to the playbook: https://github.com/ansible/tower-fact-modules/blob/master/scan_facts.yml.
6.10.1. Fact scan playbooks
The scan job playbook, scan_facts.yml, contains invocations of three fact scan modules - packages, services, and files - along with Ansible’s standard fact gathering. The scan_facts.yml playbook file is similar to this:

- hosts: all
  vars:
    scan_use_checksum: false
    scan_use_recursive: false
  tasks:
    - scan_packages:
    - scan_services:
    - scan_files:
        paths: '{{ scan_file_paths }}'
        get_checksum: '{{ scan_use_checksum }}'
        recursive: '{{ scan_use_recursive }}'
      when: scan_file_paths is defined
The scan_files fact module is the only module that accepts parameters, passed through extra_vars on the scan job template:

scan_file_paths: /tmp/
scan_use_checksum: true
scan_use_recursive: true
- The scan_file_paths parameter can have multiple settings (such as /tmp/ or /var/log).
- The scan_use_checksum and scan_use_recursive parameters can also be set to false or omitted. An omission is the same as a false setting.
Scan job templates should enable become and use credentials for which become is a possibility. You can enable become by checking Privilege Escalation from the options list.
6.10.2. Supported OSes for scan_facts.yml
If you use the scan_facts.yml playbook with fact caching enabled, ensure that you are using one of the following supported operating systems:
- Red Hat Enterprise Linux 5, 6, 7, 8, and 9
- Ubuntu 23.04 (Support for Ubuntu is deprecated and will be removed in a future release)
- OEL 6 and 7
- SLES 11 and 12
- Debian 6, 7, 8, 9, 10, 11, and 12
- Fedora 22, 23, and 24
- Amazon Linux 2023.1.20230912
Some of these operating systems require initial configuration to run python or have access to the python packages, such as python-apt, which the scan modules depend on.
6.10.3. Pre-scan setup
The following are examples of playbooks that configure certain distributions so that scan jobs can be run against them:
Bootstrap Ubuntu (16.04)

---
- name: Get Ubuntu 16, and on, ready
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: install python-simplejson and python-apt
      raw: sudo apt-get -y update && sudo apt-get -y install python-simplejson python-apt

Bootstrap Fedora (23, 24)

---
- name: Get Fedora ready
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: install python-simplejson and rpm-python
      raw: sudo dnf -y update && sudo dnf -y install python-simplejson rpm-python
6.10.4. Custom fact scans
A playbook for a custom fact scan is similar to the example in the Fact scan playbooks section. For example, a playbook that only uses a custom scan_foo Ansible fact module looks similar to this:
scan_foo.py:

#!/usr/bin/python
# Minimal custom fact scan module: returns a hard-coded "foo" fact.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict())
    foo = [
        {"hello": "world"},
        {"foo": "bar"},
    ]
    results = dict(ansible_facts=dict(foo=foo))
    module.exit_json(**results)


main()
To use a custom fact module, ensure that it lives in the /library/ subdirectory of the Ansible project used in the scan job template. This fact scan module returns a hard-coded set of facts:
[ { "hello": "world" }, { "foo": "bar" } ]
For more information, see the Developing modules section of the Ansible documentation.
6.10.5. Fact caching
Automation controller can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job-template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This enables you to use job templates with --limit while still having access to the entire inventory of host facts. You can specify a global per-host timeout setting (in seconds) that the plugin enforces in the job settings, available from the navigation panel.
After launching a job that uses fact cache (use_fact_cache=True), each host’s ansible_facts are all stored by the controller in the job’s inventory.
The Ansible Fact Cache plugin that is included with automation controller is enabled on jobs with fact cache enabled (use_fact_cache=True).
When a job that has fact cache enabled (use_fact_cache=True) has run, automation controller restores all records for the hosts in the inventory. Any records with update times newer than the currently stored facts per host are updated in the database.
New and changed facts are logged through automation controller’s logging facility, specifically to the system_tracking namespace or logger. The logging payload includes the following fields:
- host_name
- inventory_id
- ansible_facts

ansible_facts is a dictionary of all Ansible facts for host_name in the automation controller inventory, inventory_id.
If a hostname includes a forward slash (/), fact cache does not work for that host. If you have an inventory with 100 hosts and one host has a / in the name, the remaining 99 hosts still collect facts.
6.10.6. Benefits of fact caching
Fact caching saves you time over running fact gathering. If you have a playbook in a job that runs against a thousand hosts and forks, you can spend 10 minutes gathering facts across all of those hosts. However, if you run a job on a regular basis, the first run of it caches these facts and the next run pulls them from the database. This reduces the runtime of jobs against large inventories, including Smart Inventories.
Do not change the ansible.cfg file to apply fact caching. Custom fact caching could conflict with the controller’s fact caching feature. You must use the fact caching module that is included with automation controller.
You can select to use cached facts in your job by checking the Enable fact storage option when you create or edit a job template.
To clear facts, run the Ansible clear_facts meta task. The following is an example playbook that uses the clear_facts meta task:

- hosts: all
  gather_facts: false
  tasks:
    - name: Clear gathered facts from all currently targeted hosts
      meta: clear_facts
You can find the API endpoint for fact caching at:
http://<controller server name>/api/v2/hosts/x/ansible_facts
6.11. Use Cloud Credentials with a cloud inventory
Cloud Credentials can be used when syncing a cloud inventory. They can also be associated with a job template and included in the runtime environment for use by a playbook. The following Cloud Credentials are supported:
6.11.1. OpenStack
The following sample playbook invokes the nova_compute Ansible OpenStack cloud module and requires credentials:
- auth_url
- username
- password
- project name
These fields are made available to the playbook through the environment variable OS_CLIENT_CONFIG_FILE, which points to a YAML file written by the controller based on the contents of the cloud credential. The following sample playbooks load the YAML file into the Ansible variable space:
- OS_CLIENT_CONFIG_FILE example:
clouds:
  devstack:
    auth:
      auth_url: http://devstack.yoursite.com:5000/v2.0/
      username: admin
      password: your_password_here
      project_name: demo
- Playbook example:
- hosts: all
  gather_facts: false
  vars:
    config_file: "{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}"
    nova_tenant_name: demo
    nova_image_name: "cirros-0.3.2-x86_64-uec"
    nova_instance_name: autobot
    nova_instance_state: 'present'
    nova_flavor_name: m1.nano
    nova_group:
      group_name: antarctica
      instance_name: deceptacon
      instance_count: 3
  tasks:
    - debug: msg="{{ config_file }}"
    - stat: path="{{ config_file }}"
      register: st
    - include_vars: "{{ config_file }}"
      when: st.stat.exists and st.stat.isreg
    - name: "Print out clouds variable"
      debug: msg="{{ clouds|default('No clouds found') }}"
    - name: "Setting nova instance state to: {{ nova_instance_state }}"
      local_action:
        module: nova_compute
        login_username: "{{ clouds.devstack.auth.username }}"
        login_password: "{{ clouds.devstack.auth.password }}"
6.11.2. Amazon Web Services
Amazon Web Services (AWS) cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
Each AWS module implicitly uses these credentials when run through the controller without having to set the aws_access_key_id or aws_secret_access_key module options.
6.11.3. Google
Google cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):
- GCE_EMAIL
- GCE_PROJECT
- GCE_CREDENTIALS_FILE_PATH
Each Google module implicitly uses these credentials when run through the controller without having to set the service_account_email, project_id, or pem_file module options.
6.11.4. Azure
Azure cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):
- AZURE_SUBSCRIPTION_ID
- AZURE_CERT_PATH
Each Azure module implicitly uses these credentials when run through the controller without having to set the subscription_id or management_cert_path module options.
6.11.5. VMware
VMware cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):
- VMWARE_USER
- VMWARE_PASSWORD
- VMWARE_HOST
The following sample playbook demonstrates the usage of these credentials:
- vsphere_guest:
    vcenter_hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    guest: newvm001
    from_template: yes
    template_src: linuxTemplate
    cluster: MainCluster
    resource_pool: "/Resources"
    vm_extra_config:
      folder: MyFolder
6.12. Provisioning Callbacks
Provisioning Callbacks are a feature of automation controller that enable a host to start a playbook run against itself, rather than waiting for a user to launch a job to manage the host from the automation controller console.
Provisioning Callbacks are only used to run playbooks on the calling host and are meant for cloud bursting. Cloud bursting is a cloud computing configuration that enables a private cloud to access public cloud resources by "bursting" into a public cloud when computing demand spikes.
Example
New instances that need client-to-server communication for configuration, such as transmitting an authorization key, can use provisioning callbacks; they are not a way to run a job against another host. This provides for automatically configuring the following:
- A system after it has been provisioned by another system (such as AWS auto-scaling, or an OS provisioning system like kickstart or preseed).
- Launching a job programmatically without invoking the automation controller API directly.
The job template launched only runs against the host requesting the provisioning.
This is often accessed with a firstboot type script or from cron.
6.12.1. Enabling Provisioning Callbacks
Procedure
To enable callbacks, check the Provisioning callback option in the job template. This displays Provisioning callback details for the job template.
NoteIf you intend to use automation controller’s provisioning callback feature with a dynamic inventory, set Update on Launch for the inventory group used in the job template.
Callbacks also require a host config key, to ensure that foreign hosts with the URL cannot request configuration. Give a custom value for the Host config key. The host key can be reused across many hosts to apply this job template against multiple hosts. If you want to control which hosts are able to request configuration, you can change the key at any time.
To callback manually using REST:
Procedure
Examine the callback URL in the UI, in the form: https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
- The "7" in the sample URL is the job template ID in automation controller.
Ensure that the request from the host is a POST. The following is an example using curl (all on a single line):

curl -k -f -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \
    https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
- Ensure that the requesting host is defined in your inventory for the callback to succeed.
Troubleshooting
If automation controller fails to locate the host either by name or IP address in one of your defined inventories, the request is denied. When running a job template in this way, ensure that the host initiating the playbook run against itself is in the inventory. If the host is missing from the inventory, the job template fails with a No Hosts Matched type error message.
If your host is not in the inventory and Update on Launch is checked for the inventory group, automation controller attempts to update cloud based inventory sources before running the callback.
Verification
Successful requests result in an entry on the Jobs tab, where you can view the results and history. You can access the callback by using REST, but the suggested method is to use one of the example scripts that are included with automation controller:
- /usr/share/awx/request_tower_configuration.sh (Linux/UNIX)
- /usr/share/awx/request_tower_configuration.ps1 (Windows)
Their usage is described in the source code of the file by passing the -h flag, as the following shows:

./request_tower_configuration.sh -h
Usage: ./request_tower_configuration.sh <options>

Request server configuration from Ansible Tower.

OPTIONS:
   -h      Show this message
   -s      Controller server (e.g. https://ac.example.com) (required)
   -k      Allow insecure SSL connections and transfers
   -c      Host config key (required)
   -t      Job template ID (required)
   -e      Extra variables
This script can retry commands and is therefore a more robust way to use callbacks than a simple curl
request. The script retries once per minute for up to ten minutes.
This is an example script. Edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry.
You can use callbacks with dynamic inventory in automation controller, for example, when pulling cloud inventory from one of the supported cloud providers. In these cases, along with setting Update On Launch, ensure that you configure an inventory cache timeout for the inventory source, to avoid hammering your cloud’s API endpoints. Because the request_tower_configuration.sh script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) is one or two minutes.
Running the request_tower_configuration.sh script from a cron job is not recommended; however, a suggested cron interval is every 30 minutes. Repeated configuration can be handled by scheduling jobs in automation controller, so the primary use of callbacks by most users is to enable a base image that is bootstrapped into the latest configuration when coming online. Running at first boot is best practice. First boot scripts are init scripts that typically self-delete, so you set up an init script that calls a copy of the request_tower_configuration.sh script and make that into an auto scaling image.
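The following is a hypothetical cloud-init user-data sketch of that first-boot pattern, reusing the values from the curl example above; adapt the server name, job template ID, and host config key to your environment:

#cloud-config
# At first boot, the newly provisioned host calls the provisioning
# callback URL so that the controller runs this job template against it.
runcmd:
  - >-
    curl -k -f -H 'Content-Type: application/json' -XPOST
    -d '{"host_config_key": "redhat"}'
    https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/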
6.12.2. Passing extra variables to Provisioning Callbacks
You can pass extra_vars in Provisioning Callbacks the same way you can in a regular job template. To pass extra_vars, the data sent must be part of the body of the POST, with application/json as the content type.
Procedure
Pass extra variables by using one of these methods:
Use the following JSON format as an example when adding your own extra_vars to be passed:

'{"extra_vars": {"variable1":"value1","variable2":"value2",...}}'
Pass extra variables to the job template call by using curl:

root@localhost:~$ curl -f -H 'Content-Type: application/json' -XPOST \
    -d '{"host_config_key": "redhat", "extra_vars": "{\"foo\": \"bar\"}"}' \
    https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback
For more information, see Launching Jobs with Curl in Configuring automation execution.
6.13. Extra variables
When you pass survey variables, they are passed as extra variables (extra_vars) within automation controller. However, passing extra variables to a job template (as you would do with a survey) can override other variables being passed from the inventory and project.
By default, extra_vars are marked as !unsafe unless you specify them in the Job Template’s Extra Variables section. Variables specified there are trusted, because they can only be added by users with enough privileges to add or edit a job template. For example, nested variables do not expand when entered as a prompt, as the Jinja brackets are treated as a string. For more information about unsafe variables, see Unsafe or raw strings.
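For example, a value entered at a prompt like the following sketch (the variable names are hypothetical) stays a literal string rather than being expanded:

# Because prompted extra_vars are marked !unsafe, the Jinja expression
# below is treated as plain text and is never rendered.
greeting: "{{ some_other_var }}"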
extra_vars passed to the job launch API are only honored if one of the following is true:
- They correspond to variables in an enabled survey.
- ask_variables_on_launch is set to True.
Example
You have a defined variable for an inventory for debug = true. It is possible that this variable, debug = true, can be overridden in a job template survey.
To ensure the variables that you pass are not overridden, ensure they are included by redefining them in the survey. You can define extra variables at the inventory, group, and host levels.
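A minimal sketch of that override, using the debug variable from the example above: if the inventory defines debug: true but a survey answer supplies debug: false, the survey value arrives as an extra variable and wins, so the task below reports the survey’s value.

# The survey answer (an extra variable) takes precedence over the
# inventory definition of the same variable.
- hosts: all
  gather_facts: false
  tasks:
    - name: Show which value of debug won
      debug:
        msg: "debug is {{ debug }}"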
If you are specifying the ALLOW_JINJA_IN_EXTRA_VARS parameter, see The ALLOW_JINJA_IN_EXTRA_VARS variable section of Configuring automation execution to configure it.
The job template extra variables dictionary is merged with the survey variables.
The following are some simplified examples of extra_vars in YAML and JSON formats:
- The configuration in YAML format:

launch_to_orbit: true
satellites:
  - sputnik
  - explorer
  - satcom
- The configuration in JSON format:
{ "launch_to_orbit": true, "satellites": ["sputnik", "explorer", "satcom"] }
The following table notes the behavior (hierarchy) of variable precedence in automation controller as it compares to variable precedence in Ansible.
Ansible | automation controller |
---|---|
role defaults | role defaults |
dynamic inventory variables | dynamic inventory variables |
inventory variables | automation controller inventory variables |
inventory group_vars | automation controller group variables |
inventory host_vars | automation controller host variables |
playbook group_vars | playbook group_vars |
playbook host_vars | playbook host_vars |
host facts | host facts |
registered variables | registered variables |
set facts | set facts |
play variables | play variables |
play vars_prompt | (not supported) |
play vars_files | play vars_files |
role and include variables | role and include variables |
block variables | block variables |
task variables | task variables |
extra variables | Job Template extra variables |
| Job Template Survey (defaults) |
| Job Launch extra variables |
6.13.1. Relaunch a job template
Instead of manually relaunching a job, a relaunch is denoted by setting launch_type to relaunch. The relaunch behavior deviates from the launch behavior in that it does not inherit extra_vars.
Job relaunching does not go through the inherit logic. It uses the same extra_vars that were calculated for the job being relaunched.
Example
You launch a job template with no extra_vars, which results in the creation of a job called j1. Then you edit the job template and add extra_vars (such as adding {"hello": "world"}).
Relaunching j1 results in the creation of j2, but because there is no inherit logic and j1 had no extra_vars, j2 does not have any extra_vars.
If you launch the job template with the extra_vars that you added after the creation of j1, the relaunch job created (j3) includes the extra_vars. Relaunching j3 results in the creation of j4, which also includes extra_vars.