
Chapter 16. The awx-manage Utility


Use the awx-manage utility to access detailed internal information of automation controller. Commands for awx-manage must run as the awx user only.

16.1. Inventory Import

awx-manage is a mechanism by which an automation controller administrator can import inventory directly into automation controller.

To use awx-manage properly, you must first create an inventory in automation controller to use as the destination for the import.

For help with awx-manage, run the following command:

awx-manage inventory_import [--help]

The inventory_import command synchronizes an automation controller inventory object with a text-based inventory file, dynamic inventory script, or a directory of one or more, as supported by core Ansible.

When running this command, specify either an --inventory-id or --inventory-name, and the path to the Ansible inventory source (--source).

awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1

By default, inventory data already stored in automation controller blends with data from the external source.

To use only the external data, specify --overwrite.

To specify that any existing hosts get variable data exclusively from the --source, specify --overwrite_vars.

The default behavior adds any new variables from the external source, overwriting keys that already exist, but preserving any variables that were not sourced from the external data source.

awx-manage inventory_import --source=/ansible/inventory/ --inventory-id=1 --overwrite

Note

Edits and additions to Inventory host variables persist beyond an inventory synchronization as long as --overwrite_vars is not set.

16.2. Cleanup of old data

awx-manage has a variety of commands used to clean old data from automation controller. Automation controller administrators can use the automation controller Management Jobs interface for access or use the command line.

  • awx-manage cleanup_jobs [--help]

This permanently deletes the job details and job output for jobs older than a specified number of days.

  • awx-manage cleanup_activitystream [--help]

This permanently deletes any Activity stream data older than a specified number of days.
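
For example, both cleanup commands accept a retention period in days; the following is a minimal sketch, assuming the --days option shown in each command's --help output:

awx-manage cleanup_jobs --days=90
awx-manage cleanup_activitystream --days=90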

16.3. Cluster management

This section describes how to manage an automation controller cluster by provisioning and deprovisioning cluster instances. Automation controller uses the awx-manage command-line tool to manage cluster instances.

For more information about the awx-manage provision_instance and awx-manage deprovision_instance commands, see Clustering.

Important

Do not run other awx-manage commands unless instructed by Ansible Support.

= Analytics gathering

Use this command to gather analytics on-demand outside of the predefined window (the default is 4 hours):

$ awx-manage gather_analytics --ship

For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this command:

awx-manage host_metric --since YYYY-MM-DD --json

The --since parameter is optional.

The --json flag specifies the output format and is optional.
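
For example, to report unique hosts automated since the start of 2024 (an illustrative date) in JSON format:

awx-manage host_metric --since 2024-01-01 --json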

= Backup and restore

You can back up and restore your system by using the Ansible Automation Platform setup playbook.

For more information, see the Backup and restore clustered environments section.

Note

When backing up Ansible Automation Platform, use the installation program that matches your currently installed version of Ansible Automation Platform.

When restoring Ansible Automation Platform, use the latest installation program available at the time of the restore. For example, if you are restoring a backup taken from version 2.6-1, use the latest 2.6-x installation program available at the time of the restore.

Backup and restore functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements in Planning your installation.

The Ansible Automation Platform setup playbook is invoked as setup.sh from the path where you unpacked the platform installer tar file. It uses the same inventory file used by the install playbook. The setup script takes the following arguments for backing up and restoring:

  • -b: Perform a database backup rather than an installation.
  • -r: Perform a database restore rather than an installation.

As the root user, call setup.sh with the appropriate parameters, and the Ansible Automation Platform backup or restore runs as configured:

root@localhost:~# ./setup.sh -b
root@localhost:~# ./setup.sh -r

Backup files are created in the same directory as the setup.sh script. To change the backup destination, specify it with the following EXTRA_VARS:

root@localhost:~# ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b

A default restore path is used unless you provide EXTRA_VARS with a non-default path, as shown in the following example:

root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r

Optionally, you can override the inventory file used by passing it as an argument to the setup script:

setup.sh -i <inventory file>

= Backup and restore playbooks

Automation controller includes playbooks to back up and restore your installation.

In addition to the install.yml file included with your setup.sh setup playbook, there are also backup.yml and restore.yml files.

These playbooks back up and restore the following:

  • The overall backup backs up:

    • The database
    • The SECRET_KEY file
  • The per-system backups include:

    • Custom configuration files
    • Manual projects
  • The restore playbook restores the backed-up files and data to a freshly installed and working second instance of automation controller.

When restoring your system, the installation program checks that the backup file exists before beginning the restoration. If the backup file is not available, the restoration fails.

Note

Make sure that your automation controller hosts are properly set up with SSH keys or user/pass variables in the hosts file, and that the user has sudo access.

= Backup and restoration considerations

Consider the following points when you backup and restore your system:

Disk space
Review your disk space requirements to ensure you have enough room to back up configuration files, keys, other relevant files, and the database of the Ansible Automation Platform installation.
System credentials

Confirm that you have the required system credentials when working with a local or remote database. On local systems, you might need root or sudo access, depending on how credentials are set up. On remote systems, you might need different credentials to grant you access to the remote system you are trying to back up or restore.

Note

The Ansible Automation Platform database backups are staged on each node at /var/backups/automation-platform through the variable backup_dir. You might need to mount a new volume to /var/backups or change the staging location with the variable backup_dir to prevent issues with disk space before running the ./setup.sh -b script.

Version
You must always use the most recent minor version of a release to backup or restore your Ansible Automation Platform installation version. For example, if the current platform version you are on is 2.0.x, only use the latest 2.0 installer.
File path
When using setup.sh to do a restore from the default restore file path, /var/lib/awx, -r is still required to do the restore, but it no longer accepts an argument. If a non-default restore file path is needed, you must provide this as an extra_var (root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r).
Directory
If the backup file is placed in the same directory as the setup.sh installer, the restore playbook automatically locates the restore files. In this case, you do not need to use the restore_backup_file extra var to specify the location of the backup file.

= Backup and restore clustered environments

The procedure for backing up and restoring a clustered environment is similar to that for a single install, with the following additional considerations:

Note

For more information about installing clustered environments, see the Install and configure section.

  • If restoring to a new cluster, ensure that the old cluster is shut down before proceeding, because the old and new clusters can conflict with each other when accessing the database.
  • Per-node backups are only restored to nodes bearing the same hostname as the backup.
  • When restoring to an existing cluster, the restore includes the following:

    • A dump of the PostgreSQL database
    • UI artifacts, included in the database dump
    • An automation controller configuration (retrieved from /etc/tower)
    • An automation controller secret key
    • Manual projects

= Restore to a different cluster

When restoring a backup to a separate instance or cluster, manual projects and custom settings under /etc/tower are retained. Job output and job events are stored in the database and are therefore not affected.

The restore process does not alter instance groups present before the restore, nor does it introduce any new instance groups. Restored automation controller resources that were associated with instance groups likely need to be reassigned to instance groups present on the new automation controller cluster.

= Usability Analytics and Data Collection

Usability data collection is included with automation controller to better understand how automation controller users interact with it.

Only users installing a trial of Ansible Automation Platform or a fresh installation of automation controller are opted in for this data collection.

Automation controller collects user data automatically to help improve the product.

For information about setting up Automation Analytics, see Configuring Automation Analytics.

= Automation Analytics

When you imported your license for the first time, you were automatically opted in for the collection of data that powers Automation Analytics, a cloud service that is part of the Ansible Automation Platform subscription.

Important

For opt-in of Automation Analytics to have any effect, your instance of automation controller must be running on Red Hat Enterprise Linux.

As with Red Hat Insights, Automation Analytics is built to collect the minimum amount of data needed. No credential secrets, personal data, automation variables, or task output is gathered.

To configure or disable this feature, see Configuring Automation Analytics.

By default, the data is collected every four hours. When you enable this feature, data is collected up to a month in arrears (or until the previous collection). You can turn off this data collection at any time in the Miscellaneous System settings of the System configuration window.

This setting can also be enabled through the API by specifying INSIGHTS_TRACKING_STATE = true in either of these endpoints, as shown in the example after this list:

  • api/v2/settings/all
  • api/v2/settings/system
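
The following is an illustrative curl request that sets the flag through the api/v2/settings/system endpoint; the controller URL and the admin:awxsecret credentials are placeholders, matching the curl examples later in this guide:

curl -k -X PATCH -H 'Content-Type: application/json' \
    --user admin:awxsecret \
    -d '{"INSIGHTS_TRACKING_STATE": true}' \
    https://<controller server name>/api/v2/settings/system/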

The Automation Analytics generated from this data collection can be found on the Red Hat Cloud Services portal.

Clusters data is the default view. This graph represents the number of job runs across all automation controller clusters over a period of time. The previous example shows a span of a week in a stacked bar-style chart that is organized by the number of jobs that ran successfully (in green) and jobs that failed (in red).

Alternatively, you can select a single cluster to view its job status information.

Job run status

This multi-line chart represents the number of job runs for a single automation controller cluster for a specified period of time. The preceding example shows a span of a week, organized by the number of successfully running jobs (in green) and jobs that failed (in red). You can specify the number of successful and failed job runs for a selected cluster over a span of one week, two weeks, and monthly increments.

On the clouds navigation panel, select Organization Statistics to view information for the following:

Note

The organization statistics page will be deprecated in a future release.

= Use by organization

The following chart represents the number of tasks run inside all jobs by a particular organization.

Tasks by organization

= Job runs by organization

This chart represents automation controller use across all automation controller clusters by organization, calculated by the number of jobs run by that organization.

Jobs by organization

= Organization status

This bar chart represents automation controller use by organization and date, which is calculated by the number of jobs run by that organization on a particular date.

Alternatively, you can specify to show the number of job runs per organization in one week, two weeks, and monthly increments.

Organization status

= Details of data collection

Automation Analytics collects the following classes of data from automation controller:

  • Basic configuration, such as which features are enabled, and what operating system is being used
  • Topology and status of the automation controller environment and hosts, including capacity and health
  • Counts of automation resources:

    • organizations, teams, and users
    • inventories and hosts
    • credentials (indexed by type)
    • projects (indexed by type)
    • templates
    • schedules
    • active sessions
    • running and pending jobs
  • Job execution details (start time, finish time, launch type, and success)
  • Automation task details (success, host id, playbook/role, task name, and module used)

You can use awx-manage gather_analytics (without --ship) to inspect the data that automation controller sends, so that you can satisfy your data collection concerns. This creates a .tar file that contains the analytics data that is sent to Red Hat.

This file contains several JSON and CSV files. Each file contains a different set of analytics data.
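
For example, a minimal inspection workflow, where <path_to_bundle> is a placeholder for the generated .tar file:

$ awx-manage gather_analytics
$ tar -tf <path_to_bundle>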

= manifest.json

manifest.json is the manifest of the analytics data. It describes each file included in the collection, and what version of the schema for that file is included.

The following is an example manifest.json file:

  "config.json": "1.1",
  "counts.json": "1.0",
  "cred_type_counts.json": "1.0",
  "events_table.csv": "1.1",
  "instance_info.json": "1.0",
  "inventory_counts.json": "1.2",
  "job_counts.json": "1.0",
  "job_instance_counts.json": "1.0",
  "org_counts.json": "1.0",
  "projects_by_scm_type.json": "1.0",
  "query_info.json": "1.0",
  "unified_job_template_table.csv": "1.0",
  "unified_jobs_table.csv": "1.0",
  "workflow_job_node_table.csv": "1.0",
  "workflow_job_template_node_table.csv": "1.0"
}

= config.json

The config.json file contains a subset of the configuration endpoint /api/v2/config from the cluster. An example config.json is:

{
    "ansible_version": "2.9.1",
    "authentication_backends": [
        "social_core.backends.azuread.AzureADOAuth2",
        "django.contrib.auth.backends.ModelBackend"
    ],
    "external_logger_enabled": true,
    "external_logger_type": "splunk",
    "free_instances": 1234,
    "install_uuid": "d3d497f7-9d07-43ab-b8de-9d5cc9752b7c",
    "instance_uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b",
    "license_expiry": 34937373,
    "license_type": "enterprise",
    "logging_aggregators": [
        "awx",
        "activity_stream",
        "job_events",
        "system_tracking"
    ],
    "pendo_tracking": "detailed",
    "platform": {
        "dist": [
            "redhat",
            "7.4",
            "Maipo"
        ],
        "release": "3.10.0-693.el7.x86_64",
        "system": "Linux",
        "type": "traditional"
    },
    "total_licensed_instances": 2500,
    "controller_url_base": "https://ansible.rhdemo.io",
    "controller_version": "3.6.3"
}

This file includes the following fields:

  • ansible_version: The system Ansible version on the host
  • authentication_backends: The user authentication backends that are available. For more information, see Configuring an authentication type.
  • external_logger_enabled: Whether external logging is enabled
  • external_logger_type: What logging backend is in use if enabled. For more information, see Logging and aggregation.
  • logging_aggregators: What logging categories are sent to external logging. For more information, see Logging and aggregation.
  • free_instances: How many hosts are available in the license. A value of zero means the cluster is fully consuming its license.
  • install_uuid: A UUID for the installation (identical for all cluster nodes)
  • instance_uuid: A UUID for the instance (different for each cluster node)
  • license_expiry: Time to expiry of the license, in seconds
  • license_type: The type of the license (should be 'enterprise' for most cases)
  • pendo_tracking: State of usability_data_collection
  • platform: The operating system the cluster is running on
  • total_licensed_instances: The total number of hosts in the license
  • controller_url_base: The base URL for the cluster used by clients (shown in Automation Analytics)
  • controller_version: Version of the software on the cluster

= instance_info.json

Automation controller includes the instance_info.json file in the analytics data it gathers. The file contains detailed information on the instances that make up the cluster, organized by instance UUID.

The following is an example instance_info.json file:

{
    "bed08c6b-19cc-4a49-bc9e-82c33936e91b": {
        "capacity": 57,
        "cpu": 2,
        "enabled": true,
        "last_isolated_check": "2019-08-15T14:48:58.553005+00:00",
        "managed_by_policy": true,
        "memory": 8201400320,
        "uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b",
        "version": "3.6.3"
    },
    "c0a2a215-0e33-419a-92f5-e3a0f59bfaee": {
        "capacity": 57,
        "cpu": 2,
        "enabled": true,
        "last_isolated_check": "2019-08-15T14:48:58.553005+00:00",
        "managed_by_policy": true,
        "memory": 8201400320,
        "uuid": "c0a2a215-0e33-419a-92f5-e3a0f59bfaee",
        "version": "3.6.3"
    }
}

This file includes the following fields:

  • capacity: The capacity of the instance for executing tasks.
  • cpu: Processor cores for the instance
  • memory: Memory for the instance
  • enabled: Whether the instance is enabled and accepting tasks
  • managed_by_policy: Whether the instance’s membership in instance groups is managed by policy, or manually managed
  • version: Version of the software on the instance

= counts.json

The counts.json file contains the total number of objects for each relevant category in a cluster.

The following is an example counts.json file:

{
    "active_anonymous_sessions": 1,
    "active_host_count": 682,
    "active_sessions": 2,
    "active_user_sessions": 1,
    "credential": 38,
    "custom_inventory_script": 2,
    "custom_virtualenvs": 4,
    "host": 697,
    "inventories": {
        "normal": 20,
        "smart": 1
    },
    "inventory": 21,
    "job_template": 78,
    "notification_template": 5,
    "organization": 10,
    "pending_jobs": 0,
    "project": 20,
    "running_jobs": 0,
    "schedule": 16,
    "team": 5,
    "unified_job": 7073,
    "user": 28,
    "workflow_job_template": 15
}

Each entry in this file is for the corresponding API objects in /api/v2, with the exception of the active session counts.

= org_counts.json

The org_counts.json file contains information on each organization in the cluster, and the number of users and teams associated with that organization.

The following is an example org_counts.json file:

{
    "1": {
        "name": "Operations",
        "teams": 5,
        "users": 17
    },
    "2": {
        "name": "Development",
        "teams": 27,
        "users": 154
    },
    "3": {
        "name": "Networking",
        "teams": 3,
        "users": 28
    }
}

= cred_type_counts.json

Automation controller includes the cred_type_counts.json file in the analytics data it gathers. The file contains information about the different credential types in the cluster, and how many credentials exist for each type.

The following is an example cred_type_counts.json file:

{
    "1": {
        "credential_count": 15,
        "managed_by_controller": true,
        "name": "Machine"
    },
    "2": {
        "credential_count": 2,
        "managed_by_controller": true,
        "name": "Source Control"
    },
    "3": {
        "credential_count": 3,
        "managed_by_controller": true,
        "name": "Vault"
    },
    "4": {
        "credential_count": 0,
        "managed_by_controller": true,
        "name": "Network"
    },
    "5": {
        "credential_count": 6,
        "managed_by_controller": true,
        "name": "Amazon Web Services"
    },
    "6": {
        "credential_count": 0,
        "managed_by_controller": true,
        "name": "OpenStack"
    },

= inventory_counts.json

The inventory_counts.json file contains information on the different inventories in the cluster.

The following is an example inventory_counts.json file:

{
    "1": {
        "hosts": 211,
        "kind": "",
        "name": "AWS Inventory",
        "source_list": [
            {
                "name": "AWS",
                "num_hosts": 211,
                "source": "ec2"
            }
        ],
        "sources": 1
    },
    "2": {
        "hosts": 15,
        "kind": "",
        "name": "Manual inventory",
        "source_list": [],
        "sources": 0
    },
    "3": {
        "hosts": 25,
        "kind": "",
        "name": "SCM inventory - test repo",
        "source_list": [
            {
                "name": "Git source",
                "num_hosts": 25,
                "source": "scm"
            }
        ],
        "sources": 1
    },
    "4": {
        "num_hosts": 5,
        "kind": "smart",
        "name": "Filtered AWS inventory",
        "source_list": [],
        "sources": 0
    }
}

= projects_by_scm_type.json

The projects_by_scm_type.json file provides a breakdown of all projects in the cluster, by source control type.

The following is an example projects_by_scm_type.json file:

{
    "git": 27,
    "hg": 0,
    "insights": 1,
    "manual": 0,
    "svn": 0
}

= query_info.json

The query_info.json file provides details on when and how the data collection happened.

The following is an example query_info.json file:

{
    "collection_type": "manual",
    "current_time": "2019-11-22 20:10:27.751267+00:00",
    "last_run": "2019-11-22 20:03:40.361225+00:00"
}

collection_type is one of manual or automatic.

= job_counts.json

The job_counts.json file provides details on the job history of the cluster, describing both how jobs were launched, and what their finishing status is.

The following is an example job_counts.json file:

    "launch_type": {
        "dependency": 3628,
        "manual": 799,
        "relaunch": 6,
        "scheduled": 1286,
        "scm": 6,
        "workflow": 1348
    },
    "status": {
        "canceled": 7,
        "failed": 108,
        "successful": 6958
    },
    "total_jobs": 7073
}

= job_instance_counts.json

The job_instance_counts.json file provides the same detail as job_counts.json, broken down by instance.

The following is an example job_instance_counts.json file:

{
    "localhost": {
        "launch_type": {
            "dependency": 3628,
            "manual": 770,
            "relaunch": 3,
            "scheduled": 1009,
            "scm": 6,
            "workflow": 1336
        },
        "status": {
            "canceled": 2,
            "failed": 60,
            "successful": 6690
        }
    }
}

Note that instances in this file are by hostname, not by UUID as they are in instance_info.

= unified_job_template_table.csv

The unified_job_template_table.csv file provides information on job templates in the system. Each line contains the following fields for the job template:

  • id: Job template id.
  • name: Job template name.
  • polymorphic_ctype_id: The id of the type of template it is.
  • model: The name of the polymorphic_ctype_id for the template. Examples include project, systemjobtemplate, jobtemplate, inventorysource, and workflowjobtemplate.
  • created: When the template was created.
  • modified: When the template was last updated.
  • created_by_id: The userid that created the template. Blank if done by the system.
  • modified_by_id: The userid that last modified the template. Blank if done by the system.
  • current_job_id: Currently executing job id for the template, if any.
  • last_job_id: Last execution of the job.
  • last_job_run: Time of last execution of the job.
  • last_job_failed: Whether the last_job_id failed.
  • status: Status of last_job_id.
  • next_job_run: Next scheduled execution of the template, if any.
  • next_schedule_id: Schedule id for next_job_run, if any.

= unified_jobs_table.csv

The unified_jobs_table.csv file provides information on jobs run by the system.

Each line contains the following fields for a job:

  • id: Job id.
  • name: Job name (from the template).
  • polymorphic_ctype_id: The id of the type of job it is.
  • model: The name of the polymorphic_ctype_id for the job. Examples include job and workflow.
  • organization_id: The organization ID for the job.
  • organization_name: Name for the organization_id.
  • created: When the job record was created.
  • started: When the job started executing.
  • finished: When the job finished.
  • elapsed: Elapsed time for the job in seconds.
  • unified_job_template_id: The template for this job.
  • launch_type: One of manual, scheduled, relaunched, scm, workflow, or dependency.
  • schedule_id: The id of the schedule that launched the job, if any.
  • instance_group_id: The instance group that executed the job.
  • execution_node: The node that executed the job (hostname, not UUID).
  • controller_node: The automation controller node for the job, if run as an isolated job, or in a container group.
  • cancel_flag: Whether the job was canceled.
  • status: Status of the job.
  • failed: Whether the job failed.
  • job_explanation: Any additional detail for jobs that failed to execute properly.
  • forks: Number of forks executed for this job.

= workflow_job_template_node_table.csv

The workflow_job_template_node_table.csv file provides information on the nodes defined in workflow job templates on the system.

Each line contains the following fields for a workflow job template node:

  • id: Node id.
  • created: When the node was created.
  • modified: When the node was last updated.
  • unified_job_template_id: The id of the job template, project, inventory, or other parent resource for this node.
  • workflow_job_template_id: The workflow job template that contains this node.
  • inventory_id: The inventory used by this node.
  • success_nodes: Nodes that are triggered after this node succeeds.
  • failure_nodes: Nodes that are triggered after this node fails.
  • always_nodes: Nodes that are always triggered after this node finishes.
  • all_parents_must_converge: Whether this node requires all its parent conditions satisfied to start.

= workflow_job_node_table.csv

The workflow_job_node_table.csv provides information on the jobs that have been executed as part of a workflow on the system.

Each line contains the following fields for a job run as part of a workflow:

  • id: Node id.
  • created: When the node was created.
  • modified: When the node was last updated.
  • job_id: The job id for the job run for this node.
  • unified_job_template_id: The id of the job template, project, inventory, or other parent resource for this node.
  • workflow_job_template_id: The workflow job template that contains this node.
  • inventory_id: The inventory used by this node.
  • success_nodes: Nodes that are triggered after this node succeeds.
  • failure_nodes: Nodes that are triggered after this node fails.
  • always_nodes: Nodes that are always triggered after this node finishes.
  • do_not_run: Nodes that were not run in the workflow due to their start conditions not being triggered.
  • all_parents_must_converge: Whether this node requires all its parent conditions satisfied to start.

= events_table.csv

The events_table.csv file provides information on all job events from all job runs in the system.

Each line contains the following fields for a job event:

  • id: Event id.
  • uuid: Event UUID.
  • created: When the event was created.
  • parent_uuid: The parent UUID for this event, if any.
  • event: The Ansible event type.
  • task_action: The module associated with this event, if any (such as command or yum).
  • failed: Whether the event returned failed.
  • changed: Whether the event returned changed.
  • playbook: Playbook associated with the event.
  • play: Play name from playbook.
  • task: Task name from playbook.
  • role: Role name from playbook.
  • job_id: Id of the job this event is from.
  • host_id: Id of the host this event is associated with, if any.
  • host_name: Name of the host this event is associated with, if any.
  • start: Start time of the task.
  • end: End time of the task.
  • duration: Duration of the task.
  • warnings: Any warnings from the task or module.
  • deprecations: Any deprecation warnings from the task or module.

= Analytics Reports

Reports for data collected are available through console.redhat.com.

Other Automation Analytics data currently available and accessible through the platform UI include the following:

Automation Calculator is a view-only version of the Automation Calculator utility that shows a report that represents (possible) savings to the subscriber.

Automation calculator

Host Metrics is an analytics report of host data, such as when each host was first automated, when it was most recently automated, how many times it was automated, and how many times it has been deleted.

Subscription Usage reports the historical usage of your subscription. Subscription capacity and licenses consumed per month are displayed, with the ability to filter by the last year, two years, or three years.

= Troubleshooting automation controller

Useful troubleshooting information for automation controller.

= Unable to login to automation controller through HTTP

Access to automation controller is intentionally restricted through a secure protocol (HTTPS).

In cases where your configuration is set up to run an automation controller node behind a load balancer or proxy as "HTTP only", and you only want to access it without SSL (for troubleshooting, for example), you must add the following settings in the custom.py file located at /etc/tower/conf.d of your automation controller instance:

SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False

Changing these settings to False enables automation controller to manage cookies and login sessions when using the HTTP protocol. You must make this change on every node of a cluster installation.

To apply the changes, run:

automation-controller-service restart

= Unable to run a job

If you are unable to run a job from a playbook, review the playbook YAML file. When importing a playbook, either manually or by a source control mechanism, keep in mind that the host definition is controlled by automation controller and should be set to hosts: all.
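
For example, a minimal playbook skeleton with the expected host definition; the ping task is only illustrative:

---
- hosts: all
  tasks:
    - name: Verify connectivity to all inventory hosts
      ansible.builtin.ping: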

= Playbooks do not show up in the Job Template list

If your playbooks are not showing up in the Job Template list, check the following:

  • Ensure that the playbook is valid YAML and can be parsed by Ansible.
  • Ensure that the permissions and ownership of the project path (/var/lib/awx/projects) are set up so that the "awx" system user can view the files. Run the following command to change the ownership:
chown awx -R /var/lib/awx/projects/

= Playbook stays in pending

If you are attempting to run a playbook job and it stays in the Pending state indefinitely, try the following actions:

  • Ensure that all supervisor services are running through supervisorctl status.
  • Ensure that the /var/ partition has more than 1 GB of space available. Jobs do not complete with insufficient space on the /var/ partition.
  • Run automation-controller-service restart on the automation controller server.

If you continue to have issues, run sosreport as root on the automation controller server, then file a support request with the result.

= Reusing an external database causes installations to fail

When reusing an external database for clustered installations, you must manually clear the database before performing subsequent installations.

Instances have been reported where reusing the external database during subsequent installation of nodes causes installation failures.

Example

You perform a clustered installation. Later, you perform a second clustered installation that reuses the same external database, and this subsequent installation fails.

When setting up an external database that has been used in a prior installation, you must manually clear the database used for the clustered node before any additional installations can succeed.

= Viewing private EC2 VPC instances in the automation controller inventory

By default, automation controller only shows instances in a VPC that have an Elastic IP (EIP) associated with them.

Procedure

  1. From the navigation panel, select Automation Execution → Infrastructure → Inventories.
  2. Select the inventory that has the Source set to Amazon EC2, and click the Source tab. In the Source Variables field, enter:

    vpc_destination_variable: private_ip_address
  3. Click Save and trigger an update of the group.

Verification

Once you complete these steps, you can see your VPC instances.

Note

Automation controller must be running inside the VPC with access to those instances if you want to configure them.

= Automation controller tips and tricks

= The automation controller CLI Tool

Automation controller has a full-featured command line interface.

For more information on configuration and use, see AWX Command Line Interface and the AWX manage utility section.

= Change the automation controller Administrator Password

During the installation process, you are prompted to enter an administrator password that is used for the admin superuser or system administrator created by automation controller. If you log in to the instance by using SSH, it tells you the default administrator password in the prompt.

If you need to change this password at any point, run the following command as root on the automation controller server:

awx-manage changepassword admin

Next, enter a new password. After that, the password you have entered works as the administrator password in the web UI.

To set policies at creation time for password validation using Django, see Django password policies.

= Create an automation controller Administrator from the command line

Occasionally you might find it helpful to create a system administrator (superuser) account from the command line.

To create a superuser, run the following command as root on the automation controller server and enter the administrator information as prompted:

awx-manage createsuperuser

= Configuring automation controller to use jump hosts connecting to managed nodes

Credentials supplied by automation controller do not flow to the jump host through ProxyCommand. They are only used for the end-node when the tunneled connection is set up.

= Configure a fixed user/keyfile in your SSH configuration file

You can configure a fixed user/keyfile in your SSH configuration file in the ProxyCommand definition that sets up the connection through the jump host.

Prerequisites

  • Check whether all jump hosts are reachable from any node that establishes an SSH connection to the managed nodes, such as a Hybrid Controller or an Execution Node.

Procedure

  1. Create an SSH configuration file /var/lib/awx/.ssh/config on each node with the following details:

    Host jumphost.example.com
        Hostname jumphost.example.com
        User <jumphostuser>
        Port <jumphostport>
        IdentityFile ~/.ssh/id_rsa
        StrictHostKeyChecking no
        ProxyCommand ssh -W %h:%p jumphost.example.com
    • The code specifies the configuration required to connect to the jump host jumphost.example.com.
    • Automation controller establishes an SSH connection from each node to the managed nodes.
    • Example values jumphost.example.com, jumphostuser, jumphostport, and ~/.ssh/id_rsa must be changed according to your environment.
    • Add a Host matching block to the already created SSH configuration file /var/lib/awx/.ssh/config on the node, for example:

      Host 192.0.*
         ...
      • The Host 192.0.* line indicates that all hosts in that subnet use the settings defined in that block. Specifically, all hosts in that subnet are accessed using the ProxyCommand setting and connect through jumphost.example.com.
      • If Host * is used to indicate that all hosts connect through the specified proxy, ensure that jumphost.example.com is excluded from that matching, for example:

        Host * !jumphost.example.com
            ...

        Using the Red Hat Ansible Automation Platform UI

  2. On the navigation panel, select Settings → Automation Execution → Job.
  3. Click Edit and add /var/lib/awx/.ssh:/home/runner/.ssh:0 to the Paths to expose isolated jobs field.
  4. Click Save to save your changes.

= Configuring jump hosts using Ansible Inventory variables

You can add a jump host to your automation controller instance through Inventory variables.

These variables can be set at the inventory, group, or host level. Use this method if you want to control the use of jump hosts inside automation controller by using the inventory.

Procedure

  • Go to your inventory and in the variables field of whichever level you choose, add the following variables:

    ansible_user: <user_name>
    ansible_connection: ssh
    ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q <user_name>@<jump_server_name>"'

= View Ansible outputs for JSON commands when using automation controller

When working with automation controller, you can use the API to obtain the Ansible outputs for commands in JSON format.

To view the Ansible outputs, browse to https://<controller server name>/api/v2/jobs/<job_id>/job_events/
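
For example, an illustrative request with curl, using the same placeholder credentials as the Launching Jobs with curl section:

curl -k --user admin:awxsecret \
    https://<controller server name>/api/v2/jobs/<job_id>/job_events/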

= Locate and configure the Ansible configuration file

This section describes how automation controller uses the Ansible configuration file (ansible.cfg) and how to configure a custom one.

While Ansible does not require a configuration file, OS packages often include a default one in /etc/ansible/ansible.cfg for possible customization.

To use a custom ansible.cfg file, place it at the root of your project. Automation controller runs ansible-playbook from the root of the project directory, where it finds the custom ansible.cfg file.

Note

An ansible.cfg file anywhere else in the project is ignored.

To learn which values you can use in this file, see Generating a sample ansible.cfg file in the Ansible documentation.

Using the defaults is acceptable for starting out, but you can configure the default module path, connection type, and other settings here.

Automation controller overrides some ansible.cfg options. For example, automation controller stores the SSH ControlMaster sockets, the SSH agent socket, and any other per-job run items in a per-job temporary directory that is passed to the container used for job execution.

= View a listing of all ansible_ variables

You can view a listing of all ansible_ variables that automation controller gathers about managed hosts.

By default, Ansible gathers "facts" about the machines under its management, accessible in Playbooks and in templates.

To view all facts available about a machine, run the setup module as an ad hoc action:

ansible -m setup hostname

This prints out a dictionary of all facts available for that particular host. For more information, see information-discovered-from-systems-facts in the Ansible documentation.

= The ALLOW_JINJA_IN_EXTRA_VARS variable

Setting ALLOW_JINJA_IN_EXTRA_VARS = template only works for saved job template extra variables.

Prompted variables and survey variables are excluded from the 'template'.

This parameter has three values:

  • Only On Template Definitions to allow usage of Jinja saved directly on a job template definition (the default).
  • Never to disable all Jinja usage (recommended).
  • Always to always allow Jinja (strongly discouraged, but an option for prior compatibility).

This parameter is configurable in the Jobs Settings page of the automation controller UI.
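
It can also be set through the API. The following is an illustrative curl request against the api/v2/settings/all endpoint mentioned earlier; the URL and admin:awxsecret credentials are placeholders, and the value follows the options listed above:

curl -k -X PATCH -H 'Content-Type: application/json' \
    --user admin:awxsecret \
    -d '{"ALLOW_JINJA_IN_EXTRA_VARS": "template"}' \
    https://<controller server name>/api/v2/settings/all/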

= Configuring the controllerhost hostname for notifications

From the System settings page, you can replace https://controller.example.com in the Base URL of the Service field with your preferred hostname to change the notification hostname.

Refreshing your automation controller license also changes the notification hostname. New installations of automation controller need not set the hostname for notifications.

= Launching Jobs with curl

Launching jobs with the automation controller API is simple.

The following are some easy-to-follow examples using the curl tool.

Assuming that your Job Template ID is '1', your controller IP is 192.168.42.100, and that admin and awxsecret are valid login credentials, you can create a new job this way:

curl -f -k -H 'Content-Type: application/json' -XPOST \
    --user admin:awxsecret \
    https://192.168.42.100/api/v2/job_templates/1/launch/

This returns a JSON object that you can parse and use to extract the 'id' field, which is the ID of the newly created job. You can also pass extra variables to the Job Template call, as in the following example:

curl -f -k -H 'Content-Type: application/json' -XPOST \
    -d '{"extra_vars": "{\"foo\": \"bar\"}"}' \
    --user admin:awxsecret https://192.168.42.100/api/v2/job_templates/1/launch/
Note

The extra_vars parameter must be a string which contains JSON, not just a JSON dictionary. Use caution when escaping the quotes, etc.

= Filtering instances returned by the dynamic inventory sources in the controller

By default, the dynamic inventory sources in automation controller (such as AWS and Google) return all instances available to the cloud credentials being used. They are automatically joined into groups based on various attributes. For example, AWS instances are grouped by region, by tag name, value, and security groups. To target specific instances in your environment, write your playbooks so that they target the generated group names.

For example:

---
- hosts: tag_Name_webserver
  tasks:
  ...

You can also use the Limit field in the Job Template settings to limit a playbook run to a certain group, groups, hosts, or a combination of them. The syntax is the same as the --limit parameter on the ansible-playbook command line.
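
For example, some illustrative limit patterns, reusing the tag_Name_webserver group from the previous example together with a hypothetical staging group:

tag_Name_webserver                 # a single group
tag_Name_webserver:staging         # hosts in either group
tag_Name_webserver:&staging        # only hosts in both groups
tag_Name_webserver:!staging        # hosts in the first group but not in the second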

You can also create your own groups by copying the auto-generated groups into your custom groups. Make sure that the Overwrite option is disabled on your dynamic inventory source, otherwise subsequent synchronization operations delete and replace your custom groups.

= Use an unreleased module from Ansible source with automation controller

If there is a feature that is available in the latest Ansible core branch that you want to use with your automation controller system, making use of it in automation controller is simple.

First, determine the updated module that you want to use from the available Ansible Core Modules or Ansible Extra Modules GitHub repositories.

Next, create a new directory, at the same directory level of your Ansible source playbooks, named /library.

When this is created, copy the module you want to use and drop it into the /library directory. It is used before your system modules and can be removed once you have updated the stable version with your normal package manager.
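
For example, a sketch of the resulting layout; the playbook and module file names are placeholders:

playbooks/
|-- library/
|   `-- my_updated_module.py
`-- site.yml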

= Use callback plugins with automation controller

Ansible has a flexible method of handling actions during playbook runs, called callback plugins. You can use these plugins with automation controller to do things such as notify services upon playbook runs or failures, or send emails after every playbook run.

For official documentation on the callback plugin architecture, see Developing plugins.

Note

Automation controller does not support the stdout callback plugin because Ansible only permits one, and it is already being used for streaming event data.

You might also want to review some example plugins, which should be modified for site-specific purposes, such as those available at: https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback

To use these plugins, put the callback plugin .py file into a directory called /callback_plugins alongside your playbook in your automation controller Project. Then, specify their paths (one path per line) in the Ansible Callback Plugins field of the Job settings:

Ansible Callback plugins
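
For example, a sketch of a project layout containing a callback plugin; the file names are placeholders:

project/
|-- callback_plugins/
|   `-- notify_service.py
`-- site.yml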

Note

To have most callbacks shipped with Ansible applied globally, you must add them to the callback_whitelist section of your ansible.cfg.

If you have custom callbacks, see Enabling callback plugins.

= Connect to Windows with winrm

By default, automation controller attempts to ssh to hosts.

You must add the winrm connection information to the group variables to which the Windows hosts belong.

To get started, edit the Windows group in which the hosts reside and place the variables in the source or edit screen for the group.

To add winrm connection info:

  • Edit the properties for the selected group by clicking the Edit icon of the group name that contains the Windows servers. In the "variables" section, add your connection information as follows: ansible_connection: winrm (see the example below).
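
    For example, a minimal sketch of the group variables; only ansible_connection: winrm comes from the step above, while the port and transport lines are optional, illustrative additions:

    ansible_connection: winrm
    ansible_port: 5986
    ansible_winrm_transport: ntlm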

When complete, save your edits. If Ansible was previously attempting an SSH connection and failed, you should re-run the job template.

= Import existing inventory files and host/group vars into automation controller

To import an existing static inventory and the accompanying host and group variables into automation controller, your inventory must be in a structure similar to the following:

inventory/
|-- group_vars
|   `-- mygroup
|-- host_vars
|   `-- myhost
`-- hosts

To import these hosts and vars, run the awx-manage command:

awx-manage inventory_import --source=inventory/ \
  --inventory-name="My Controller Inventory"

If you only have a single flat file of inventory, a file called ansible-hosts, for example, import it as follows:

awx-manage inventory_import --source=./ansible-hosts \
  --inventory-name="My Controller Inventory"

In case of conflicts or to overwrite an inventory named "My Controller Inventory", run:

awx-manage inventory_import --source=inventory/ \
  --inventory-name="My Controller Inventory" \
  --overwrite --overwrite-vars

If you receive an error, such as:

ValueError: need more than 1 value to unpack

Create a directory to hold the hosts file, as well as the group_vars:

mkdir -p inventory-directory/group_vars

Then, for each of the groups that have :vars listed, create a file called inventory-directory/group_vars/<groupname> and format the variables in YAML format.

The importer then handles the conversion correctly.
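
For example, if your flat inventory file contains a group variables section such as the following (the group and variable names are illustrative):

[webservers:vars]
ntp_server=ntp.example.com
proxy=proxy.example.com

Move those variables into inventory-directory/group_vars/webservers in YAML format:

ntp_server: ntp.example.com
proxy: proxy.example.com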
