Chapter 16. The awx-manage Utility
Use the awx-manage utility to access detailed internal information of automation controller. Commands for awx-manage must run as the awx user only.
16.1. Inventory Import
awx-manage is a mechanism by which an automation controller administrator can import inventory directly into automation controller.
To use awx-manage properly, you must first create an inventory in automation controller to use as the destination for the import.
For help with awx-manage, run the following command:
awx-manage inventory_import [--help]
The inventory_import command synchronizes an automation controller inventory object with a text-based inventory file, dynamic inventory script, or a directory of one or more, as supported by core Ansible.
When running this command, specify either an --inventory-id or --inventory-name, and the path to the Ansible inventory source (--source).
awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1
By default, inventory data already stored in automation controller blends with data from the external source.
To use only the external data, specify --overwrite.
To specify that any existing hosts get variable data exclusively from the --source, specify --overwrite_vars.
The default behavior adds any new variables from the external source, overwriting keys that already exist, but preserving any variables that were not sourced from the external data source.
awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite
Edits and additions to Inventory host variables persist beyond an inventory synchronization as long as --overwrite_vars is not set.
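For example, to merge in the external data while having existing hosts take their variable data exclusively from the source, combine the options shown above (the inventory ID of 1 is illustrative):
awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite_vars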
16.2. Cleanup of old data
awx-manage has a variety of commands used to clean old data from automation controller. Automation controller administrators can use the automation controller Management Jobs interface for access or use the command line.
- awx-manage cleanup_jobs [--help]
This permanently deletes the job details and job output for jobs older than a specified number of days.
- awx-manage cleanup_activitystream [--help]
This permanently deletes any Activity Stream data older than a specified number of days.
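For example, assuming both commands accept a --days option for the retention window (run them with --help to confirm the options available in your version), a 90-day cleanup might look like the following:
awx-manage cleanup_jobs --days=90
awx-manage cleanup_activitystream --days=90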
16.3. Cluster management
This section describes how to manage an automation controller cluster by provisioning and deprovisioning cluster instances. Automation controller uses the awx-manage command-line tool to manage cluster instances.
For more information about the awx-manage provision_instance and awx-manage deprovision_instance commands, see Clustering.
Do not run other awx-manage commands unless instructed by Ansible Support.
= Analytics gathering
Use this command to gather analytics on-demand outside of the predefined window (the default is 4 hours):
$ awx-manage gather_analytics --ship
For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this command:
awx-manage host_metric --since YYYY-MM-DD --json
The --since parameter is optional.
The --json flag specifies the output format and is optional.
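For example, to report unique hosts automated since the start of 2023 in JSON format (the date is illustrative):
awx-manage host_metric --since 2023-01-01 --json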
= Backup and restore
You can back up and restore your system by using the Ansible Automation Platform setup playbook.
For more information, see the Backup and restore clustered environments section.
When backing up Ansible Automation Platform, use the installation program that matches your currently installed version of Ansible Automation Platform.
When restoring Ansible Automation Platform, use the latest installation program available at the time of the restore. For example, if you are restoring a backup taken from version 2.6-1, use the latest 2.6-x installation program available at the time of the restore.
Backup and restore functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements in Planning your installation.
The Ansible Automation Platform setup playbook is invoked as setup.sh from the path where you unpacked the platform installer tar file. It uses the same inventory file used by the install playbook. The setup script takes the following arguments for backing up and restoring:
- -b: Perform a database backup rather than an installation.
- -r: Perform a database restore rather than an installation.
As the root user, call setup.sh with the appropriate parameters to run the Ansible Automation Platform backup or restore as configured:
root@localhost:~# ./setup.sh -b
root@localhost:~# ./setup.sh -r
Backup files are created in the same path where the setup.sh script resides. You can change the location by specifying the following EXTRA_VARS:
root@localhost:~# ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b
A default restore path is used unless you provide EXTRA_VARS with a non-default path, as shown in the following example:
root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r
Optionally, you can override the inventory file used by passing it as an argument to the setup script:
setup.sh -i <inventory file>
= Backup and restore playbooks
Automation controller includes playbooks to back up and restore your installation.
In addition to the install.yml file included with your setup.sh setup playbook, there are also backup.yml and restore.yml files.
These playbooks back up and restore your installation.
The overall backup backs up:
- The database
- The SECRET_KEY file
The per-system backups include:
- Custom configuration files
- Manual projects
The restore backup restores the backed-up files and data to a freshly installed and working second instance of automation controller.
When restoring your system, the installation program checks to see that the backup file exists before beginning the restoration. If the backup file is not available, your restoration fails.
Make sure that your automation controller hosts are properly set up with SSH keys, user or pass variables in the hosts file, and that the user has sudo access.
= Backup and restoration considerations
Consider the following points when you backup and restore your system:
- Disk space
- Review your disk space requirements to ensure you have enough room to backup configuration files, keys, other relevant files, and the database of the Ansible Automation Platform installation.
- System credentials
- Confirm you have the required system credentials when working with a local database or a remote database. On local systems, you might need root or sudo access, depending on how credentials are set up. On remote systems, you might need different credentials to grant you access to the remote system you are trying to back up or restore.
  Note: The Ansible Automation Platform database backups are staged on each node at /var/backups/automation-platform through the variable backup_dir. You might need to mount a new volume to /var/backups or change the staging location with the variable backup_dir to prevent issues with disk space before running the ./setup.sh -b script.
- Version
- You must always use the most recent minor version of a release to back up or restore your Ansible Automation Platform installation version. For example, if the current platform version you are on is 2.0.x, only use the latest 2.0 installer.
- File path
- When using setup.sh to do a restore from the default restore file path, /var/lib/awx, -r is still required to do the restore, but it no longer accepts an argument. If a non-default restore file path is needed, you must provide this as an extra_var (root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r).
- Directory
- If the backup file is placed in the same directory as the setup.sh installer, the restore playbook automatically locates the restore files. In this case, you do not need to use the restore_backup_file extra var to specify the location of the backup file.
= Backup and restore clustered environments
The procedure for backup and restore for a clustered environment is similar to a single install, except for the following considerations:
For more information about installing clustered environments, see the Install and configure section.
- If restoring to a new cluster, ensure that the old cluster is shut down before proceeding, because the old and new clusters can conflict with each other when accessing the database.
- Per-node backups are only restored to nodes bearing the same hostname as the backup.
When restoring to an existing cluster, the restore contains the following:
- A dump of the PostgreSQL database
- UI artifacts, included in the database dump
- An automation controller configuration (retrieved from /etc/tower)
- An automation controller secret key
- Manual projects
= Restore to a different cluster
When restoring a backup to a separate instance or cluster, manual projects and custom settings under /etc/tower are retained. Job output and job events are stored in the database and are therefore not affected.
The restore process does not alter instance groups present before the restore. It does not introduce any new instance groups either. Restored automation controller resources that were associated to instance groups likely need to be reassigned to instance groups present on the new automation controller cluster.
= Usability Analytics and Data Collection
Usability data collection is included with automation controller to collect data to better understand how automation controller users interact with it.
Only users installing a trial of Red Hat Ansible Automation Platform or a fresh installation of automation controller are opted in for this data collection.
Automation controller collects user data automatically to help improve the product.
For information about setting up Automation Analytics, see Configuring Automation Analytics.
= Automation Analytics
When you imported your license for the first time, you were automatically opted in for the collection of data that powers Automation Analytics, a cloud service that is part of the Ansible Automation Platform subscription.
For opt-in of Automation Analytics to have any effect, your instance of automation controller must be running on Red Hat Enterprise Linux.
As with Red Hat Insights, Automation Analytics is built to collect the minimum amount of data needed. No credential secrets, personal data, automation variables, or task output is gathered.
When you imported your license for the first time, you were automatically opted in to Automation Analytics. To configure or disable this feature, see Configuring Automation Analytics.
By default, the data is collected every four hours. When you enable this feature, data is collected up to a month in arrears (or until the previous collection). You can turn off this data collection at any time in the Miscellaneous System settings of the System configuration window.
This setting can also be enabled through the API by specifying INSIGHTS_TRACKING_STATE = true in either of these endpoints:
- api/v2/settings/all
- api/v2/settings/system
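A minimal sketch of enabling this setting through the API with curl, assuming admin credentials (admin and awxsecret) and a controller at 192.168.42.100, as in the curl examples later in this guide:
curl -k -H 'Content-Type: application/json' -XPATCH \
  -d '{"INSIGHTS_TRACKING_STATE": true}' \
  --user admin:awxsecret \
  https://192.168.42.100/api/v2/settings/system/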
The Automation Analytics generated from this data collection can be found on the Red Hat Cloud Services portal.
Clusters data is the default view. This graph represents the number of job runs across all automation controller clusters over a period of time. The previous example shows a span of a week in a stacked bar-style chart that is organized by the number of jobs that ran successfully (in green) and jobs that failed (in red).
Alternatively, you can select a single cluster to view its job status information.
This multi-line chart represents the number of job runs for a single automation controller cluster for a specified period of time. The preceding example shows a span of a week, organized by the number of successfully running jobs (in green) and jobs that failed (in red). You can specify the number of successful and failed job runs for a selected cluster over a span of one week, two weeks, and monthly increments.
On the clouds navigation panel, you can view information for the following:
The organization statistics page will be deprecated in a future release.
= Use by organization
The following chart represents the number of tasks run inside all jobs by a particular organization.
= Job runs by organization
This chart represents automation controller use across all automation controller clusters by organization, calculated by the number of jobs run by that organization.
= Organization status
This bar chart represents automation controller use by organization and date, which is calculated by the number of jobs run by that organization on a particular date.
Alternatively, you can specify to show the number of job runs per organization in one week, two weeks, and monthly increments.
= Details of data collection
Automation Analytics collects the following classes of data from automation controller:
- Basic configuration, such as which features are enabled, and what operating system is being used
- Topology and status of the automation controller environment and hosts, including capacity and health
- Counts of automation resources:
- organizations, teams, and users
- inventories and hosts
- credentials (indexed by type)
- projects (indexed by type)
- templates
- schedules
- active sessions
- running and pending jobs
- Job execution details (start time, finish time, launch type, and success)
- Automation task details (success, host id, playbook/role, task name, and module used)
You can use awx-manage gather_analytics (without --ship) to inspect the data that automation controller sends, so that you can satisfy your data collection concerns. This creates a .tar file that contains the analytics data that is sent to Red Hat.
This file contains several JSON and CSV files. Each file contains a different set of analytics data.
- manifest.json
- config.json
- instance_info.json
- counts.json
- org_counts.json
- cred_type_counts.json
- inventory_counts.json
- projects_by_scm_type.json
- query_info.json
- job_counts.json
- job_instance_counts.json
- unified_job_template_table.csv
- unified_jobs_table.csv
- workflow_job_template_node_table.csv
- workflow_job_node_table.csv
- events_table.csv
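For example, a hedged sketch of generating the bundle locally and listing its contents; the tarball path shown is illustrative, and the command prints the actual path of the file it creates:
awx-manage gather_analytics
tar -tf /tmp/awx-analytics.tar.gz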
= manifest.json
manifest.json is the manifest of the analytics data. It describes each file included in the collection, and what version of the schema for that file is included.
The following is an example manifest.json file:
= config.json
The config.json file contains a subset of the configuration endpoint /api/v2/config from the cluster. An example config.json is:
It includes the following fields:
- ansible_version: The system Ansible version on the host
- authentication_backends: The user authentication backends that are available. For more information, see Configuring an authentication type.
- external_logger_enabled: Whether external logging is enabled
- external_logger_type: What logging backend is in use if enabled. For more information, see Logging and aggregation.
- logging_aggregators: What logging categories are sent to external logging. For more information, see Logging and aggregation.
- free_instances: How many hosts are available in the license. A value of zero means the cluster is fully consuming its license.
- install_uuid: A UUID for the installation (identical for all cluster nodes)
- instance_uuid: A UUID for the instance (different for each cluster node)
- license_expiry: Time to expiry of the license, in seconds
- license_type: The type of the license (should be 'enterprise' for most cases)
- pendo_tracking: State of usability_data_collection
- platform: The operating system the cluster is running on
- total_licensed_instances: The total number of hosts in the license
- controller_url_base: The base URL for the cluster used by clients (shown in Automation Analytics)
- controller_version: Version of the software on the cluster
= instance_info.json
The instance_info.json file contains detailed information on the instances that make up the cluster, organized by instance UUID.
The following is an example instance_info.json file:
It includes the following fields:
- capacity: The capacity of the instance for executing tasks.
- cpu: Processor cores for the instance
- memory: Memory for the instance
- enabled: Whether the instance is enabled and accepting tasks
- managed_by_policy: Whether the instance’s membership in instance groups is managed by policy, or manually managed
- version: Version of the software on the instance
= counts.json
The counts.json file contains the total number of objects for each relevant category in a cluster.
The following is an example counts.json file:
Each entry in this file is for the corresponding API objects in /api/v2, with the exception of the active session counts.
= org_counts.json
The org_counts.json file contains information on each organization in the cluster, and the number of users and teams associated with that organization.
The following is an example org_counts.json file:
= cred_type_counts.json
The cred_type_counts.json file has information about the different credential types in the cluster, and how many credentials exist for each type.
The following is an example cred_type_counts.json file:
= inventory_counts.json
The inventory_counts.json file contains information on the different inventories in the cluster.
The following is an example inventory_counts.json file:
= projects_by_scm_type.json
The projects_by_scm_type.json file provides a breakdown of all projects in the cluster, by source control type.
The following is an example projects_by_scm_type.json file:
= query_info.json
The query_info.json file provides details on when and how the data collection happened.
The following is an example query_info.json file:
{
"collection_type": "manual",
"current_time": "2019-11-22 20:10:27.751267+00:00",
"last_run": "2019-11-22 20:03:40.361225+00:00"
}
collection_type is one of manual or automatic.
= job_counts.json
The job_counts.json file provides details on the job history of the cluster, describing both how jobs were launched, and what their finishing status is.
The following is an example job_counts.json file:
= job_instance_counts.json
The job_instance_counts.json file provides the same detail as job_counts.json, broken down by instance.
The following is an example job_instance_counts.json file:
Note that instances in this file are by hostname, not by UUID as they are in instance_info.
= unified_job_template_table.csv
The unified_job_template_table.csv file provides information on job templates in the system. Each line contains the following fields for the job template:
- id: Job template id.
- name: Job template name.
- polymorphic_ctype_id: The id of the type of template it is.
- model: The name of the polymorphic_ctype_id for the template. Examples include project, systemjobtemplate, jobtemplate, inventorysource, and workflowjobtemplate.
- created: When the template was created.
- modified: When the template was last updated.
- created_by_id: The user id that created the template. Blank if done by the system.
- modified_by_id: The user id that last modified the template. Blank if done by the system.
- current_job_id: Currently executing job id for the template, if any.
- last_job_id: Last execution of the job.
- last_job_run: Time of last execution of the job.
- last_job_failed: Whether the last_job_id failed.
- status: Status of last_job_id.
- next_job_run: Next scheduled execution of the template, if any.
- next_schedule_id: Schedule id for next_job_run, if any.
= unified_jobs_table.csv
The unified_jobs_table.csv file provides information on jobs run by the system.
Each line contains the following fields for a job:
- id: Job id.
- name: Job name (from the template).
- polymorphic_ctype_id: The id of the type of job it is.
- model: The name of the polymorphic_ctype_id for the job. Examples include job and workflow.
- organization_id: The organization ID for the job.
- organization_name: Name for the organization_id.
- created: When the job record was created.
- started: When the job started executing.
- finished: When the job finished.
- elapsed: Elapsed time for the job in seconds.
- unified_job_template_id: The template for this job.
- launch_type: One of manual, scheduled, relaunched, scm, workflow, or dependency.
- schedule_id: The id of the schedule that launched the job, if any.
- instance_group_id: The instance group that executed the job.
- execution_node: The node that executed the job (hostname, not UUID).
- controller_node: The automation controller node for the job, if run as an isolated job, or in a container group.
- cancel_flag: Whether the job was canceled.
- status: Status of the job.
- failed: Whether the job failed.
- job_explanation: Any additional detail for jobs that failed to execute properly.
- forks: Number of forks executed for this job.
= workflow_job_template_node_table.csv
The workflow_job_template_node_table.csv file provides information on the nodes defined in workflow job templates on the system.
Each line contains the following fields for a workflow job template node:
- id: Node id.
- created: When the node was created.
- modified: When the node was last updated.
- unified_job_template_id: The id of the job template, project, inventory, or other parent resource for this node.
- workflow_job_template_id: The workflow job template that contains this node.
- inventory_id: The inventory used by this node.
- success_nodes: Nodes that are triggered after this node succeeds.
- failure_nodes: Nodes that are triggered after this node fails.
- always_nodes: Nodes that always are triggered after this node finishes.
- all_parents_must_converge: Whether this node requires all its parent conditions satisfied to start.
= workflow_job_node_table.csv
The workflow_job_node_table.csv provides information on the jobs that have been executed as part of a workflow on the system.
Each line contains the following fields for a job run as part of a workflow:
- id: Node id.
- created: When the node was created.
- modified: When the node was last updated.
- job_id: The job id for the job run for this node.
- unified_job_template_id: The id of the job template, project, inventory, or other parent resource for this node.
- workflow_job_template_id: The workflow job template that contains this node.
- inventory_id: The inventory used by this node.
- success_nodes: Nodes that are triggered after this node succeeds.
- failure_nodes: Nodes that are triggered after this node fails.
- always_nodes: Nodes that always are triggered after this node finishes.
- do_not_run: Nodes that were not run in the workflow due to their start conditions not being triggered.
- all_parents_must_converge: Whether this node requires all its parent conditions satisfied to start.
= events_table.csv
The events_table.csv file provides information on all job events from all job runs in the system.
Each line contains the following fields for a job event:
- id: Event id.
- uuid: Event UUID.
- created: When the event was created.
- parent_uuid: The parent UUID for this event, if any.
- event: The Ansible event type.
- task_action: The module associated with this event, if any (such as command or yum).
- failed: Whether the event returned failed.
- changed: Whether the event returned changed.
- playbook: Playbook associated with the event.
- play: Play name from playbook.
- task: Task name from playbook.
- role: Role name from playbook.
- job_id: Id of the job this event is from.
- host_id: Id of the host this event is associated with, if any.
- host_name: Name of the host this event is associated with, if any.
- start: Start time of the task.
- end: End time of the task.
- duration: Duration of the task.
- warnings: Any warnings from the task or module.
- deprecations: Any deprecation warnings from the task or module.
= Analytics Reports
Reports for data collected are available through console.redhat.com.
Other Automation Analytics data currently available and accessible through the platform UI include the following:
Automation Calculator is a view-only version of the Automation Calculator utility that shows a report that represents (possible) savings to the subscriber.
Host Metrics is an analytics report collected for host data, such as when they were first automated, when they were most recently automated, how many times they were automated, and how many times each host has been deleted.
Subscription Usage reports the historical usage of your subscription. Subscription capacity and licenses consumed per month are displayed, with the ability to filter by the last year, two years, or three years.
= Troubleshooting automation controller
Useful troubleshooting information for automation controller.
= Unable to login to automation controller through HTTP
Access to automation controller is intentionally restricted through a secure protocol (HTTPS).
In cases where your configuration is set up to run an automation controller node behind a load balancer or proxy as "HTTP only", and you only want to access it without SSL (for troubleshooting, for example), you must add the following settings in the custom.py file located at /etc/tower/conf.d of your automation controller instance:
SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False
Changing these settings to False enables automation controller to manage cookies and login sessions when using the HTTP protocol. You must make this change on every node of a cluster installation.
To apply the changes, run:
automation-controller-service restart
= Unable to run a job
If you are unable to run a job from a playbook, review the playbook YAML file. When importing a playbook, either manually or by a source control mechanism, keep in mind that the host definition is controlled by automation controller and should be set to hosts: all.
= Playbooks do not show up in the Job Template list
If your playbooks are not showing up in the Job Template list, check the following:
- Ensure that the playbook is valid YAML and can be parsed by Ansible.
- Ensure that the permissions and ownership of the project path (/var/lib/awx/projects) are set up so that the "awx" system user can view the files. Run the following command to change the ownership:
chown awx -R /var/lib/awx/projects/
= Playbook stays in pending
If you are attempting to run a playbook job and it stays in the Pending state indefinitely, try the following actions:
- Ensure that all supervisor services are running through supervisorctl status.
- Ensure that the /var/ partition has more than 1 GB of space available. Jobs do not complete with insufficient space on the /var/ partition.
- Run automation-controller-service restart on the automation controller server.
If you continue to have issues, run sosreport as root on the automation controller server, then file a support request with the result.
= Reusing an external database causes installations to fail
When reusing an external database for clustered installations, you must manually clear the database before performing subsequent installations.
Instances have been reported where reusing the external database during subsequent installation of nodes causes installation failures.
Example
You perform a clustered installation. Later, you perform a second clustered installation that reuses the same external database, and this subsequent installation fails.
When setting up an external database that has been used in a prior installation, you must manually clear the database used for the clustered node before any additional installations can succeed.
= Viewing private EC2 VPC instances in the automation controller inventory
By default, automation controller only shows instances in a VPC that have an Elastic IP (EIP) associated with them.
Procedure
- From the navigation panel, select Inventories. Select the inventory that has the Source set to Amazon EC2, and click the Source tab. In the Source Variables field, enter:
vpc_destination_variable: private_ip_address
- Save your changes and trigger an update of the group.
Verification
Once you complete these steps, you can see your VPC instances.
Automation controller must be running inside the VPC with access to those instances if you want to configure them.
= Automation controller tips and tricks
- Use the automation controller CLI Tool
- Change the automation controller Admin Password
- Create an automation controller Admin from the commandline
- Set up a jump host to use with automation controller
- View Ansible outputs for JSON commands when using automation controller
- Locate and configure the Ansible configuration file
- View a listing of all ansible_ variables
- The ALLOW_JINJA_IN_EXTRA_VARS variable
- Configure the controllerhost hostname for notifications
- Launch Jobs with curl
- Filter instances returned by the dynamic inventory sources in automation controller
- Use an unreleased module from Ansible source with automation controller
- Connect to Windows with winrm
- Import existing inventory files and host/group vars into automation controller
= The automation controller CLI Tool
Automation controller has a full-featured command line interface.
For more information on configuration and use, see AWX Command Line Interface and the AWX manage utility section.
= Change the automation controller Administrator Password
During the installation process, you are prompted to enter an administrator password that is used for the admin superuser or system administrator created by automation controller. If you log in to the instance by using SSH, it tells you the default administrator password in the prompt.
If you need to change this password at any point, run the following command as root on the automation controller server:
awx-manage changepassword admin
awx-manage changepassword admin
Next, enter a new password. After that, the password you have entered works as the administrator password in the web UI.
To set policies at creation time for password validation using Django, see Django password policies.
= Create an automation controller Administrator from the command line
Occasionally you might find it helpful to create a system administrator (superuser) account from the command line.
To create a superuser, run the following command as root on the automation controller server and enter the administrator information as prompted:
awx-manage createsuperuser
awx-manage createsuperuser
= Configuring automation controller to use jump hosts connecting to managed nodes
Credentials supplied by automation controller do not flow to the jump host through ProxyCommand. They are only used for the end-node when the tunneled connection is set up.
= Configure a fixed user/keyfile in your SSH configuration file
You can configure a fixed user/keyfile in your SSH configuration file in the ProxyCommand definition that sets up the connection through the jump host.
Prerequisites
- Check whether all jump hosts are reachable from any node that establishes an SSH connection to the managed nodes, such as a Hybrid Controller or an Execution Node.
Procedure
- Create an SSH configuration file /var/lib/awx/.ssh/config on each node with the connection details for the jump host (a sketch of such a file follows this procedure).
  - The configuration specifies what is required to connect to the jump host 'jumphost.example.com'.
  - Automation controller establishes an SSH connection from each node to the managed nodes.
  - The example values jumphost.example.com, jumphostuser, jumphostport, and ~/.ssh/id_rsa must be changed according to your environment.
- Add a Host matching block to the already created SSH configuration file /var/lib/awx/.ssh/config on the node, for example:

Host 192.0.*
    ...

  - The Host 192.0.* line indicates that all hosts in that subnet use the settings defined in that block. Specifically, all hosts in that subnet are accessed using the ProxyCommand setting and connect through jumphost.example.com.
  - If Host * is used to indicate that all hosts connect through the specified proxy, ensure that jumphost.example.com is excluded from that matching, for example:

Host * !jumphost.example.com
    ...

Using the Red Hat Ansible Automation Platform UI
- On the navigation panel, open the jobs settings.
- Add /var/lib/awx/.ssh:/home/runner/.ssh:0 to the Paths to expose isolated jobs field.
- Save your changes.
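A minimal sketch of such a /var/lib/awx/.ssh/config file, assuming the example values named above (jumphost.example.com, jumphostuser, jumphostport, and ~/.ssh/id_rsa) and a managed-node subnet of 192.0.*; adjust every value for your environment:
Host 192.0.*
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -W %h:%p -p jumphostport jumphostuser@jumphost.example.com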
= Configuring jump hosts using Ansible Inventory variables
You can add a jump host to your automation controller instance through Inventory variables.
These variables can be set at either the inventory, group, or host level. Use this method if you want to control the use of jump hosts inside automation controller using the inventory.
Procedure
- Go to your inventory, and in the variables field of whichever level you choose, add the following variables:
ansible_user: <user_name>
ansible_connection: ssh
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q <user_name>@<jump_server_name>"'
= View Ansible outputs for JSON commands when using automation controller
When working with automation controller, you can use the API to obtain the Ansible outputs for commands in JSON format.
To view the Ansible outputs, browse to https://<controller server name>/api/v2/jobs/<job_id>/job_events/
= Locate and configure the Ansible configuration file
This section describes how automation controller uses the Ansible configuration file (ansible.cfg) and how to configure a custom one.
While Ansible does not require a configuration file, OS packages often include a default one in /etc/ansible/ansible.cfg for possible customization.
To use a custom ansible.cfg file, place it at the root of your project. Automation controller runs ansible-playbook from the root of the project directory, where it finds the custom ansible.cfg file.
An ansible.cfg file anywhere else in the project is ignored.
To learn which values you can use in this file, see Generating a sample ansible.cfg file in the Ansible documentation.
Using the defaults is acceptable for starting out, but you can configure the default module path, connection type, and other settings here.
Automation controller overrides some ansible.cfg options. For example, automation controller stores the SSH ControlMaster sockets, the SSH agent socket, and any other per-job run items in a per-job temporary directory that is passed to the container used for job execution.
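As a minimal illustration, a project-root ansible.cfg might adjust defaults such as the number of forks and the connection timeout; the values shown are assumptions, not recommendations:
[defaults]
forks = 20
timeout = 30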
= View a listing of all ansible_ variables
You can view a listing of all ansible_ variables that automation controller gathers about managed hosts.
By default, Ansible gathers "facts" about the machines under its management, accessible in Playbooks and in templates.
To view all facts available about a machine, run the setup module as an ad hoc action:
ansible -m setup hostname
ansible -m setup hostname
This prints out a dictionary of all facts available for that particular host. For more information, see information-discovered-from-systems-facts in the Ansible documentation.
= The ALLOW_JINJA_IN_EXTRA_VARS variable
Setting ALLOW_JINJA_IN_EXTRA_VARS = template only works for saved job template extra variables.
Prompted variables and survey variables are excluded from the 'template'.
This parameter has three values:
- Only On Template Definitions to allow usage of Jinja saved directly on a job template definition (the default).
- Never to disable all Jinja usage (recommended).
- Always to always allow Jinja (strongly discouraged, but an option for prior compatibility).
This parameter is configurable in the Jobs Settings page of the automation controller UI.
= Configuring the controllerhost hostname for notifications
From the System settings page, you can replace https://controller.example.com in the Base URL of the Service field with your preferred hostname to change the notification hostname.
Refreshing your automation controller license also changes the notification hostname. New installations of automation controller need not set the hostname for notifications.
= Launching Jobs with curl
Launching jobs with the automation controller API is simple.
The following are some easy-to-follow examples using the curl tool.
Assuming that your Job Template ID is '1', your controller IP is 192.168.42.100, and that admin and awxsecret are valid login credentials, you can create a new job this way:
curl -f -k -H 'Content-Type: application/json' -XPOST \
--user admin:awxsecret \
https://192.168.42.100/api/v2/job_templates/1/launch/
This returns a JSON object that you can parse and use to extract the 'id' field, which is the ID of the newly created job. You can also pass extra variables to the Job Template call, as in the following example:
curl -f -k -H 'Content-Type: application/json' -XPOST \
-d '{"extra_vars": "{\"foo\": \"bar\"}"}' \
--user admin:awxsecret https://192.168.42.100/api/v2/job_templates/1/launch/
The extra_vars parameter must be a string which contains JSON, not just a JSON dictionary. Use caution when escaping the quotes, etc.
= Filtering instances returned by the dynamic inventory sources in the controller
By default, the dynamic inventory sources in automation controller (such as AWS and Google) return all instances available to the cloud credentials being used. They are automatically joined into groups based on various attributes. For example, AWS instances are grouped by region, by tag name, value, and security groups. To target specific instances in your environment, write your playbooks so that they target the generated group names.
For example:
---
- hosts: tag_Name_webserver
tasks:
...
You can also use the Limit field in the Job Template settings to limit a playbook run to a certain group, groups, hosts, or a combination of them. The syntax is the same as the --limit parameter on the ansible-playbook command line.
You can also create your own groups by copying the auto-generated groups into your custom groups. Make sure that the Overwrite option is disabled on your dynamic inventory source, otherwise subsequent synchronization operations delete and replace your custom groups.
= Use an unreleased module from Ansible source with automation controller
If there is a feature that is available in the latest Ansible core branch that you want to use with your automation controller system, making use of it in automation controller is simple.
First, determine the updated module that you want to use from the available Ansible Core Modules or Ansible Extra Modules GitHub repositories.
Next, create a new directory, at the same directory level of your Ansible source playbooks, named /library.
When this is created, copy the module you want to use and drop it into the /library directory. It is consumed before your system modules and can be removed once you have updated the stable version with your normal package manager.
= Use callback plugins with automation controller
Ansible has a flexible method of handling actions during playbook runs, called callback plugins. You can use these plugins with automation controller to do things such as notify services upon playbook runs or failures, or send emails after every playbook run.
For official documentation on the callback plugin architecture, see Developing plugins.
Automation controller does not support the stdout callback plugin because Ansible only permits one, and it is already being used for streaming event data.
You might also want to review some example plugins, which should be modified for site-specific purposes, such as those available at: https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback
To use these plugins, put the callback plugin .py file into a directory called /callback_plugins alongside your playbook in your automation controller Project. Then, specify their paths (one path per line) in the Ansible Callback Plugins field of the Job settings.
To have most callbacks shipped with Ansible applied globally, you must add them to the callback_whitelist section of your ansible.cfg.
If you have custom callbacks, see Enabling callback plugins.
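For example, a hedged sketch of an ansible.cfg entry that enables two callbacks shipped with Ansible (timer and profile_tasks); the callbacks you enable depend on your needs:
[defaults]
callback_whitelist = timer, profile_tasks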
= Connect to Windows with winrm
By default, automation controller attempts to ssh to hosts.
You must add the winrm connection information to the group variables to which the Windows hosts belong.
To get started, edit the Windows group in which the hosts reside and place the variables in the source or edit screen for the group.
To add winrm connection info:
- Edit the properties for the selected group by clicking the Edit icon of the group name that contains the Windows servers.
- In the variables section, add your connection information as follows:
ansible_connection: winrm
When complete, save your edits. If Ansible was previously attempting an SSH connection and failed, you should re-run the job template.
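A minimal sketch of the group variables, assuming password-based authentication over HTTPS; apart from ansible_connection, the remaining variables are standard Ansible winrm settings shown here as assumptions to adjust for your environment:
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore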
= Import existing inventory files and host/group vars into automation controller
To import an existing static inventory and the accompanying host and group variables into automation controller, your inventory must be in a structure similar to the following:
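For example, a hedged sketch of such a layout; the group name webservers is illustrative:
inventory/
    hosts
    group_vars/
        webservers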
To import these hosts and vars, run the awx-manage command:
awx-manage inventory_import --source=inventory/ \
--inventory-name="My Controller Inventory"
If you only have a single flat file of inventory, a file called ansible-hosts, for example, import it as follows:
awx-manage inventory_import --source=./ansible-hosts \
--inventory-name="My Controller Inventory"
In case of conflicts or to overwrite an inventory named "My Controller Inventory", run:
awx-manage inventory_import --source=inventory/ \
--inventory-name="My Controller Inventory" \
--overwrite --overwrite-vars
If you receive an error, such as:
ValueError: need more than 1 value to unpack
Create a directory to hold the hosts file, as well as the group_vars:
mkdir -p inventory-directory/group_vars
Then, for each of the groups that have :vars listed, create a file called inventory-directory/group_vars/<groupname> and format the variables in YAML format.
The importer then handles the conversion correctly.