
Chapter 2. Metrics file locations


Reporting metrics to Red Hat is a requirement. Logging metrics for your automation jobs is automatically enabled when you install Ansible SDK. You cannot disable it.

Every time an automation job runs, a new tarball is created. You are responsible for scraping the data from the storage location and for monitoring the size of the directory.

You can customize the metrics storage location for each Python file that runs a playbook, or you can use the default location.

2.1. Default location for metrics files

When you install Ansible SDK, the default metrics storage location is set to the ~/.ansible/metrics directory.

After an automation job is complete, the metrics are written to a tarball in the directory. Ansible SDK creates the directory if it does not already exist.

2.2. Customizing the metrics storage location

You can specify the path to the directory to store your metrics files in the Python file that runs your playbook.

You can set a different directory path for every Python automation job file, or you can store the tarballs for multiple jobs in one directory. If you do not set the path in a Python file, the tarballs for the jobs that it runs will be saved in the default directory (~/.ansible/metrics).

Procedure

  1. Decide on a location on your file system to store the metrics data. Ensure that the location is readable and writable. Ansible SDK creates the directory if it does not already exist.
  2. In the job_options in the main() function of your Python file, set the metrics_output_path parameter to the directory where the tarballs are to be stored.

    In the following example, the metrics files are stored in the /tmp/metrics directory after the pb.yml playbook has been executed:

    async def main():
        executor = AnsibleSubprocessJobExecutor()
        executor_options = AnsibleSubprocessJobOptions()
        job_options = {
            'playbook': 'pb.yml',
            # Change the default job-related data path
            'metrics_output_path': '/tmp/metrics',
        }

2.3. Viewing metrics files

After an automation job has completed, navigate to the directory that you specified for storing the data and list the files.

The data for the newly completed job is contained in a tarball file whose name begins with the date and time that the automation job was run. For example, the following file records data for an automation job executed on 8 March 2023 at 02:30.

$ ls

2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz

To extract the files from the tarball, run tar xvf.

$ tar xvf 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz

x jobs.csv
x modules.csv
x collections.csv
x roles.csv
x playbook_on_stats.csv

The following example shows the jobs.csv file.

$ cat jobs.csv

job_id,job_type,started,finished,job_state,hosts_ok,hosts_changed,hosts_skipped,hosts_failed,hosts_unreachable,task_count,task_duration
84896567-a586-4215-a914-7503010ef281,local,2023-03-08 02:30:22.440045,2023-03-08 02:30:24.316458,,5,0,0,0,0,2,0:00:01.876413

When a parameter value is not available, the corresponding entry in the CSV file is empty. In the jobs.csv file above, the job_state value is not available.
