
Chapter 1. Terraform integration


1.1. About the Terraform integration

Learn about the supported integrations between IBM HashiCorp products and Red Hat Ansible Automation Platform, the integration workflows, and migration paths to help determine the best options for your environment.

1.1.1. Introduction

Many organizations find themselves using both Ansible Automation Platform and Terraform Enterprise or HCP Terraform, recognizing that these can work in harmony to create an improved experience for developers and operations teams.

While Terraform Enterprise and HCP Terraform excel at Infrastructure as Code (IaC) for provisioning and de-provisioning cloud resources, Ansible Automation Platform is a versatile, all-purpose automation solution ideal for configuration management, application deployment, and orchestrating complex IT workflows across diverse domains.

This integration directly addresses common challenges such as managing disparate automation tools, ensuring consistent configuration across hybrid cloud environments, and accelerating deployment cycles. By bringing together Terraform’s declarative approach to infrastructure provisioning with Ansible Automation Platform’s procedural approach to configuration and orchestration, users can achieve:

  • Optimized costs: Reduce cloud waste, minimize manual processes, and combat tool sprawl. This integration can lead to a significant reduction in infrastructure costs and a high return on investment.
  • Reduced risk: Lower the risk of breaches, enforce policies, and significantly decrease unplanned downtime. The ability to review Terraform plan output before applying it in a workflow, with approval steps, enhances security and compliance.
  • Faster time to value: Boost developer productivity and deploy new compute resources more rapidly, leading to a faster time to market. This is achieved through unified lifecycle management and automation for Day 0 (provisioning), Day 1 (configuration), and Day 2 (ongoing management) operations.

By enabling direct calls between Ansible Automation Platform and Terraform Enterprise or HCP Terraform, organizations can unlock time to value by creating combined workflows, reduce risk through enhanced product integrations, and enhance Infrastructure-as-Code with Ansible Automation Platform content and practices. This allows for unified lifecycle management, enabling tasks from initial provisioning and configuration to ongoing health checks, incident response, patching, and infrastructure optimization.

1.1.2. Integration workflows

Depending on your existing setup, you can integrate these products from Ansible Automation Platform or from Terraform. Migration paths are provided for community users and for migrating from the cloud.terraform collection to hashicorp.terraform.

1.1.2.1. Ansible-initiated workflow

Ansible automation hub collections allow Ansible Automation Platform users to leverage the Terraform Enterprise or HCP Terraform provisioning capabilities.

hashicorp.terraform collection

This collection provides API integration between Ansible Automation Platform and Terraform Enterprise or HCP Terraform. This solution works natively with Ansible Automation Platform and reduces setup complexity because it doesn’t require a binary installation and it includes a default execution environment.
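As a sketch of what this API-driven integration looks like in a playbook, the following hypothetical task uses the collection's run module (described later in this chapter) to start a run in an HCP Terraform workspace. The workspace ID and the tfc_token variable are placeholders:

```yaml
---
# Hypothetical playbook: start a run in HCP Terraform through the API.
# The workspace ID and the tfc_token variable are placeholders.
- name: Provision infrastructure through HCP Terraform
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Start a run and wait for it to finish
      hashicorp.terraform.run:
        workspace_id: ws-1234
        run_message: "run started from Ansible Automation Platform"
        state: present
        tf_token: "{{ tfc_token }}"
        auto_apply: true
        poll: true
        poll_interval: 10
        poll_timeout: 30
```

Because the integration uses the Terraform API, no terraform binary is needed in the execution environment.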

cloud.terraform collection

This collection provides CLI integration between Ansible Automation Platform and Terraform Enterprise or HCP Terraform. To use this collection, you must install a binary and create an execution environment.

Although this collection is supported, we recommend using the hashicorp.terraform collection instead to take advantage of its API capabilities.

1.1.2.2. Migration workflows

Community edition users can migrate to Terraform Enterprise or HCP Terraform, and then integrate the Ansible Automation Platform capabilities using the cloud.terraform (CLI) collection. However, we recommend using the hashicorp.terraform (API) collection instead.

If you are already using the cloud.terraform collection, you can migrate to hashicorp.terraform.

1.1.2.3. Terraform-initiated workflow

For existing Terraform Enterprise or HCP Terraform users, Terraform can directly call Ansible Automation Platform at the end of provisioning for a more seamless and secure workflow. This enables Terraform Enterprise or HCP Terraform users to enhance their immutable infrastructure automation with Ansible Automation Platform Day 2 automation capabilities and manage infrastructure updates and lifecycle events.

1.2. Integrating from Ansible Automation Platform

As an administrator, you configure the integration from the Ansible Automation Platform user interface. Use the procedures related to the collection you have installed.

1.2.1. Authenticating to hashicorp.terraform

After installing or migrating to hashicorp.terraform, users must create credentials to use with job templates in Ansible Automation Platform.

1.2.1.1. Creating a credential

Users must create a credential to use with job templates in Ansible Automation Platform.

Prerequisite

  • You have a Terraform API token.

Procedure

  1. Log in to Ansible Automation Platform.
  2. From the navigation panel, select Automation Execution > Infrastructure > Credentials, and then select Create credential.
  3. From the Credential type list, select the HCP Terraform credential type.
  4. In the Token field, enter the Terraform API token.
  5. (Optional) Edit the Description field and select the TF organization from the Organization list.
  6. Click Save credential. You are ready to use the credential in a job template.

1.2.2. Integrating with cloud.terraform

When you integrate with cloud.terraform, you must create a credential, build an execution environment, and launch a job template in Ansible Automation Platform.

1.2.2.1. Creating a credential

You can set up credentials directly from the Ansible Automation Platform user interface. The credentials are provided to the execution environment and Ansible Automation Platform reads them from there. This eliminates the need to manually update each playbook.

Prerequisites

  • You must have a Terraform API token.
  • Install the certified cloud.terraform collection from automation hub. (You need an Ansible subscription to access and download collections on automation hub.)

Procedure

  1. Log in to Ansible Automation Platform.
  2. From the navigation panel, select Automation Execution > Infrastructure > Credential Types.
  3. Click Create credential type. The Create Credential Type page opens and displays the Details tab.
  4. For the Credential Type, enter a name.
  5. In the Input configuration field, enter the following YAML parameter and values:

    fields:
       - id: token
         type: string
         label: token
         secret: true
  6. In the Injector configuration field, enter the following configuration.

    • For Terraform Enterprise, the hostname is the location where you have deployed TFE:

      env:
        TF_TOKEN_<hostname>: '{{ token }}'
    • For HCP Terraform, use:

      env:
        TF_TOKEN_app_terraform_io: '{{ token }}'
  7. To save your configuration, click Create credential type again. The new credential type is created.
  8. To create an instance of your new credential type, navigate to the Automation Execution > Infrastructure > Credentials page, and select Create credential.
  9. From the Credential type list, select the name of the credential type you created earlier.
  10. In the Token field, enter the Terraform API token.
  11. (Optional) Edit the Description and select the TF organization from the Organization list.
  12. Click Save credential.

1.2.2.2. Building an execution environment

You must build an execution environment using the automation controller so that Ansible Automation Platform can provide the credentials necessary for using its automation features.

Prerequisites

  • You need a pre-existing execution environment with the latest version of the cloud.terraform collection before you can add it in automation controller. You cannot use the default execution environment provided by Ansible Automation Platform because the default environment does not include the terraform CLI binary.

    Note

    If you have migrated from Terraform Community Edition, you can continue to use your existing execution environment and update it to the latest version of cloud.terraform.

  • Install the terraform CLI binary in your pre-existing execution environment. See Additional resources below for a link to the binary.
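A hypothetical ansible-builder definition for such an execution environment might look like the following. The base image and the Terraform version are assumptions to adjust for your environment, and the build steps assume curl and unzip are available in the base image:

```yaml
---
# Hypothetical execution-environment.yml for ansible-builder (version 3 schema).
# The base image and the Terraform version are placeholders.
version: 3
images:
  base_image:
    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
dependencies:
  galaxy:
    collections:
      - name: cloud.terraform
additional_build_steps:
  append_final:
    # Install the terraform CLI binary, which the default execution
    # environment does not include. Assumes curl and unzip are available.
    - RUN curl -Lo /tmp/terraform.zip https://releases.hashicorp.com/terraform/1.11.0/terraform_1.11.0_linux_amd64.zip
    - RUN unzip /tmp/terraform.zip -d /usr/local/bin/ && rm /tmp/terraform.zip
```

Build the image with ansible-builder, push it to your registry, and use that repository link as the Image value in the procedure below.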

Procedure

  1. From the navigation panel, select Automation Execution > Infrastructure > Execution Environments.
  2. Click Create execution environment.

    Create a new execution environment page
  3. For Name, enter a name for your Ansible Automation Platform execution environment.
  4. For Image, enter the repository link to the image for your pre-existing execution environment.
  5. Click Create execution environment. Your newly added execution environment is ready to be used in a job template.

1.2.2.3. Creating and launching a job template

Create and launch a job template to complete the integration and use the automation features in Ansible Automation Platform.

Procedure

  1. From the navigation panel, select Automation Execution > Templates.
  2. Select Create template > Create Job Template.
  3. From the Execution Environment list, select the environment you created.
  4. From the Credentials list, select the credentials instance you created previously. If you do not see the credentials, click Browse to see more options in the list.
  5. Enter any additional information for the required fields.
  6. Click Create job template.
  7. Click Launch template.
  8. To launch the job, click Next and Finish. The job output shows that the job has run.

Verification

To see that the job has run successfully from the Terraform user interface, select Workspaces > Ansible-Content-Integration > Run. The Run list shows the state of the Triggered via CLI job. You can see it go from the Queued to the Plan Finished state.

1.2.3. Ansible-initiated workflows

After you set up authentication with Ansible Automation Platform, there are many possible Ansible-initiated workflows and many patterns that you can apply.

Some workflows to consider include:

  • Performing traditional infrastructure setup. You first configure Ansible Automation Platform to do a task that Terraform cannot manage, and then perform terraform apply. For example, configure Ansible Automation Platform to set up the state backend for an initial run, or use Ansible Automation Platform to set up initial cloud credentials or users to interact with a cloud provider’s API.
  • Modifying infrastructure with Terraform. In this case, turn off Ansible monitoring for the infrastructure that you are modifying, then perform terraform apply with your changes, and finally turn monitoring back on.
  • Automating terraform apply based on an event. For example, you might want to trigger a run when a ServiceNow ticket is opened or a service catalog order is placed. Set up a webhook in the Ansible Automation Platform UI so that Ansible Automation Platform can receive the event.
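As a sketch of the second pattern, a playbook might wrap the terraform apply step between monitoring changes. The monitoring tasks below are placeholders for your own tooling, and the project path is an example:

```yaml
---
# Hypothetical playbook for the "modify infrastructure" pattern.
# The monitoring tasks are placeholders for your monitoring tool's API.
- name: Modify infrastructure with Terraform
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Turn off monitoring for the affected infrastructure (placeholder)
      ansible.builtin.debug:
        msg: "Disable monitoring through your monitoring tool's API"

    - name: Apply the Terraform changes
      cloud.terraform.terraform:
        project_path: "/usr/home/tf"
        state: present

    - name: Turn monitoring back on (placeholder)
      ansible.builtin.debug:
        msg: "Re-enable monitoring through your monitoring tool's API"
```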

1.3. Migrating from other versions

Migrate from earlier collections or community editions to use the most advanced features of the HashiCorp and Ansible Automation Platform integrations.

1.3.1. Migrating from cloud.terraform to hashicorp.terraform

If you are using the existing cloud.terraform (CLI-based) collection, you can migrate your existing playbooks to the hashicorp.terraform (API-based) collection. The main modules for hashicorp.terraform that you must configure are hashicorp.terraform.configuration_version and hashicorp.terraform.run.

1.3.1.1. Configuring the hashicorp.terraform.configuration_version module

To migrate to the hashicorp.terraform collection, you must configure the hashicorp.terraform.configuration_version module. This module manages configuration versions in Terraform Enterprise or HCP Terraform.

Prerequisites

  • Install the Ansible Automation Platform certified hashicorp.terraform collection.
  • Verify that a valid organization and workspace are correctly set up in Terraform Enterprise or HCP Terraform.

Procedure

  1. Replicate your automation tasks from the cloud.terraform modules.

    Example

    - name: Create configuration version with auto_queue_runs to false
      hashicorp.terraform.configuration_version:
        workspace_id: ws-1234
        configuration_files_path: "/usr/home/tf"
        auto_queue_runs: false
        tf_validate_certs: true
        poll_interval: 3
        poll_timeout: 15
        state: present
  2. Configure the following required parameters:

    • workspace_id or workspace + organization: The workspace ID or the workspace name and organization where the configuration version will be created and the file will be uploaded (for state: present).
    • configuration_files_path: The path where the required Terraform Enterprise or HCP Terraform files will be uploaded to create a configuration version (for state: present). The module accepts two file types for configuration_files_path:

      • Directory: Any folder containing Terraform Enterprise or HCP Terraform files. The module auto-creates the .tar.gz file from all contents recursively.
      • .tar.gz Archive: Pre-compressed gzip tarball. The module validates TAR format and gzip compression.
    • configuration_version_id: The configuration version ID that will be archived (state: archived). This action deletes the associated uploaded .tar.gz file. Note the following:

      • Only uploaded versions that were created using the API or CLI, have no active runs, and are not the current version for any workspace can be archived.
      • When the configuration_version_id is unspecified, Terraform Enterprise or HCP Terraform selects the latest approved configuration_version_id in the workspace.
    • auto_queue_runs: Determines if Terraform Enterprise or HCP Terraform automatically starts a run after the configuration upload (true by default) or requires manual initiation (false).
  3. Set additional optional parameters as needed.

1.3.1.2. Configuring the hashicorp.terraform.run module

The hashicorp.terraform.run module lets you manage Terraform Enterprise or HCP Terraform runs using create, apply, cancel, and discard operations. You can trigger plans or apply operations on specified workspaces with customizable settings.

Prerequisites

  • Ensure that a valid Terraform API token is properly configured to authenticate with your Terraform Enterprise or HCP Terraform environment.
  • Verify that a valid organization and workspace are correctly set up in Terraform Enterprise or HCP Terraform.

Procedure

  1. Create a run module.

    Example

    - name: Create a destroy run with auto_apply
      hashicorp.terraform.run:
        workspace_id: ws-1234
        run_message: "destroy vpc"
        state: "present"
        tf_token: <your token>
        is_destroy: true
        auto_apply: true
        target_addrs:
          - "aws_vpc.vpc1"
          - "aws_vpc.vpc2"
        poll: true
        poll_interval: 10
        poll_timeout: 30
  2. Configure the following required parameters:

    • workspace_id or workspace + organization: The workspace ID or the workspace name and organization where the configuration version will be created and the file will be uploaded (for state: present).
    • run_id: The unique identifier of the run to apply, cancel, or discard operations.
    • tf_token: The Terraform API authentication token. If this value is not set, the TF_TOKEN environment variable is used.
  3. (Optional) Configure the built-in polling options that determine the wait period for Terraform Enterprise or HCP Terraform operations to complete:

    • poll: true: (Default) Checks the run status every poll_interval seconds (default: 5s) until completion or poll_timeout (default: 25s) is reached, returning the final status.
    • poll: false: Returns immediately after initiating the run without waiting.
  4. Set additional optional parameters as needed.

1.3.1.3. Migration examples for hashicorp.terraform modules

These before and after examples help users understand how the modules can be configured in a real-world environment.

1.3.1.3.1. Example 1: Plan Only
  • Before (cloud.terraform.terraform):
    - name: Create a plan file using check mode
      cloud.terraform.terraform:
        force_init: true
        project_path: "/usr/home/tf"
        plan_file: "/usr/home/tf/terraform.tfplan"
        state: present
        check_mode: true
        check_destroy: true
        variables:
          environment: prod
  • After (hashicorp.terraform.*):

    • The configuration_version module:

      - name: Create configuration version with auto_queue_runs to false
        hashicorp.terraform.configuration_version:
          workspace_id: ws-1234
          configuration_files_path: "/usr/home/tf_files"
          auto_queue_runs: false
          tf_validate_certs: true
          poll_interval: 5
          poll_timeout: 10
          state: present
    • The plan_only run with the run module:

      - name: Create a plan only run with variables
        hashicorp.terraform.run:
          workspace_id: ws-1234
          run_message: "plan-only vpc creation"
          poll: false
          state: "present"
          tf_token: "{{ tfc_token }}"
          plan_only: true
          variables:
            - key: "env"
              value: "production"
1.3.1.3.2. Example 2: Plan and apply
  • Before (cloud.terraform.terraform):

    1. Generate the plan:

      - name: Plan and Apply Workflow - Step 1 - Generate Plan
        cloud.terraform.terraform:
          force_init: true
          project_path: "/usr/home/tf"
          plan_file: "/usr/home/tf/workflow.tfplan"
          state: present
          check_mode: true
          variables:
            environment: prod
    2. Apply the plan:

      - name: Plan and Apply Workflow - Step 2 - Apply Plan
        cloud.terraform.terraform:
          project_path: "/usr/home/tf"
          plan_file: "/usr/home/tf/workflow.tfplan"
          state: present
  • After (hashicorp.terraform.run):

    1. The configuration_version module:

      - name: Create configuration version with auto_queue_runs to false
        hashicorp.terraform.configuration_version:
          workspace_id: ws-1234
          configuration_files_path: "/usr/home/tf_files"
          auto_queue_runs: false
          tf_validate_certs: true
          poll_interval: 5
          poll_timeout: 10
          state: present
    2. The run module with two options for plan and apply workflow:
  • Option 1: Uses the auto_apply parameter to handle both the plan and apply workflows:

    - name: Create a run with auto_apply
      hashicorp.terraform.run:
        workspace_id: ws-1234
        run_message: "plan and apply vpc creation"
        state: "present"
        tf_token: "{{ tfc_token }}"
        auto_apply: true
        poll: true
        poll_interval: 10
        poll_timeout: 30
  • Option 2: Uses two sub-steps to create a save_plan run and then apply it:

    1. Create the plan:

      - name: Create a save plan run
        hashicorp.terraform.run:
          workspace_id: ws-1234
          run_message: "save plan vpc creation"
          state: "present"
          tf_token: "{{ tfc_token }}"
          poll: true
          poll_interval: 10
          poll_timeout: 30
          save_plan: true
    2. Apply the plan. You get the run_id from the output of the run module task:

      - name: Apply the save plan run
        hashicorp.terraform.run:
          run_id: run-1234
          state: "applied"
          tf_token: "{{ tfc_token }}"
          poll: true
          poll_interval: 10
          poll_timeout: 30

1.3.2. Migrating from Terraform Community Edition

If you want to use Ansible Automation Platform with Terraform Enterprise (TFE) or HCP Terraform and you are currently using Terraform Community Edition (TCE), you must migrate to TFE or HCP Terraform and then update Ansible Automation Platform configurations to work with TFE or HCP Terraform.

1.3.2.1. Migrating from the community edition

When you migrate from TCE to TFE or HCP Terraform, you are not migrating the collection itself. Instead, you are adapting your existing TCE usage to work with TFE or HCP Terraform.

After you migrate, you must update the Ansible Automation Platform credentials, execution environment, and job templates.

Note

The cloud.terraform collection only supports the CLI-driven workflow in HCP Terraform.

Prerequisites

  • Use the latest supported version of Terraform (1.11 or higher).
  • Follow the tf-migrate CLI instructions under Additional resources below.
  • Ensure that the HCP Terraform or TFE workspace is not set to automatically apply plans.

Procedure

  1. To prevent errors when running playbooks against TFE or HCP Terraform, do the following actions before running a playbook:

    1. Confirm that the Terraform version in the execution environment is the same as the version stated in TFE or HCP Terraform.
    2. Perform an initialization in TFE or HCP Terraform:

      terraform init
    3. If you have a local state file in your execution environment, delete the local state file.
    4. Get a token from HCP Terraform or Terraform Enterprise, which you will use to create the credential in a later step. Ensure that the team or user token has the necessary permissions to execute the desired capabilities in the playbook.
    5. Remove the backend config and files from your playbook definition.
    6. Add the workspace definition to your TF config, or use an environment variable if you want to define the workspace without updating the playbook itself.

      Note

      You can add the workspace to your playbook to scale your workspace utilization.
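For example, after removing the local backend configuration, a cloud block such as the following (with placeholder hostname, organization, and workspace names) points the configuration at TFE or HCP Terraform:

```hcl
# Hypothetical cloud block; the hostname, organization, and workspace
# names are placeholders.
terraform {
  cloud {
    # For HCP Terraform use app.terraform.io; for TFE use your TFE hostname.
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      name = "my-workspace"
    }
  }
}
```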

  2. From the Ansible Automation Platform user interface, update the credentials, execution environment, and job templates as described in Integrating with cloud.terraform.

  3. (Optional) After the migration is completed and verified, you can run the additional modules and plugins from the collection in your execution environment.

1.4. Integrating from Terraform

If you have already provisioned your environment from Terraform Enterprise, you can use the Terraform official provider to leverage Ansible Automation Platform automation capabilities.

1.4.1. Configuring the provider

You must configure the provider to allow Terraform to reference and manage a subset of Ansible Automation Platform resources.

The provider configuration belongs in the root module of a Terraform configuration. Child modules receive their provider configurations from the root module.

Prerequisites

  • You have installed and configured Terraform Enterprise or HCP Terraform.
  • You have installed the latest release version of terraform-provider-aap from the Terraform registry.

    Note

    The default latest version on the Terraform registry might be a pre-release version (such as 1.2.3-beta). Select a supported release version, which uses a 1.2.3 format without dashes.

  • You have created a username and password or an API token for Ansible Automation Platform. Environment variables are also supported.

    Note

    Token authentication is recommended because users can manage tokens for specific integrations (such as Terraform), limit token access, and have full control over token lifecycle.

Procedure

  1. Create a Terraform configuration (.tf) file. Include a provider block. The name given in the block header is the local name of the provider to configure. This provider should already be included in a required_providers block.

    Example

    # This example creates an inventory named `My new inventory`
    # and adds a host `tf_host` and a group `tf_group` to it,
    # and then launches a job based on the "Demo Job Template"
    # in the "Default" organization using the inventory created.
    #
    terraform {
      required_providers {
        aap = {
          source = "ansible/aap"
        }
      }
    }
    
    provider "aap" {
      host  = "https://AAP_HOST"
      # Do not record credentials directly in the Terraform configuration.
      # Provide your token using the AAP_TOKEN environment variable.
      token = "my-aap-token"
    }
    
    resource "aap_inventory" "my_inventory" {
      name         = "My new inventory"
      description  = "A new inventory for testing"
      organization = 1
      variables = jsonencode(
        {
          "foo" : "bar"
        }
      )
    }
    
    resource "aap_group" "my_group" {
      inventory_id = aap_inventory.my_inventory.id
      name         = "tf_group"
      variables = jsonencode(
        {
          "foo" : "bar"
        }
      )
    }
    
    resource "aap_host" "my_host" {
      inventory_id = aap_inventory.my_inventory.id
      name         = "tf_host"
      variables = jsonencode(
        {
          "foo" : "bar"
        }
      )
      groups = [aap_group.my_group.id]
    }
    
    data "aap_job_template" "demo_job_template" {
      name              = "Demo Job Template"
      organization_name = "Default"
    }
    
    # To pass the inventory ID to the job template execution, set the
    # Inventory field on the job template to "prompt on launch".
    resource "aap_job" "my_job" {
      inventory_id    = aap_inventory.my_inventory.id
      job_template_id = data.aap_job_template.demo_job_template.id
    
      # This resource creation needs to wait for the host and group to be created in the inventory
      depends_on = [
        aap_host.my_host,
        aap_group.my_group
      ]
    }
  2. Add the configuration arguments, as shown in the previous example. You must configure the host and credentials. A full list of supported schema is available on the Terraform registry for your aap provider release version.

    • host: (String) AAP server URL. Can also be configured using the AAP_HOST environment variable.
    • insecure_skip_verify: (Boolean) If true, configures the provider to skip TLS certificate verification. Can also be configured by setting the AAP_INSECURE_SKIP_VERIFY environment variable.
    • password: (String, Case Sensitive) Password to use for basic authentication. Ignored if the token is set. Note that hardcoded credentials are not recommended for security reasons. It is a best practice to use the AAP_PASSWORD environment variable instead.
    • timeout: (Number) Timeout specifies a time limit, in seconds, for requests made to the AAP server. Defaults to 5 if not provided. A timeout of zero means no timeout. Can also be configured by setting the AAP_TIMEOUT environment variable.
    • token: (String, Case Sensitive): Token to use for token authentication. Note that hardcoded credentials are not recommended for security reasons. It is a best practice to use the AAP_TOKEN environment variable instead.
    • username: (String) Username to use for basic authentication. Ignored if the token is set. Can also be configured by setting the AAP_USERNAME environment variable.
  3. (Optional) You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied.
  4. (Optional) You can also use an alias meta-argument that is defined by Terraform and is available for all provider blocks. alias lets you use the same provider with different configurations for different resources.
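As a sketch of the alias meta-argument, the following hypothetical configuration manages two Ansible Automation Platform deployments from one root module; the hostnames, inventory name, and organization ID are placeholders:

```hcl
# Hypothetical example: two Ansible Automation Platform deployments
# managed from one configuration. Hostnames are placeholders.
provider "aap" {
  host = "https://aap-production.example.com"
}

provider "aap" {
  alias = "staging"
  host  = "https://aap-staging.example.com"
}

resource "aap_inventory" "staging_inventory" {
  # Use the aliased provider for this resource.
  provider     = aap.staging
  name         = "Staging inventory"
  organization = 1
}
```

Resources without an explicit provider argument use the default (unaliased) provider configuration.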

1.4.2. Inventory best practices

When you integrate the Terraform provider with Ansible Automation Platform, create inventory resources using the inventory plugin for your specific cloud provider, such as Amazon Web Services (AWS) or Microsoft Azure, or create static inventory resources with the Terraform provider.

Do not use a Terraform state-backed inventory plugin to manage your resources, because it is error-prone due to technical limitations in cloud environments, such as timing errors and race conditions during resource provisioning.

Specific cloud provider inventory plugins are designed to handle these cases, allowing Ansible Automation Platform to accurately discover and target newly created instances. This results in a more stable and reliable integration between your provisioning and configuration management workflows.

To ensure a reliable workflow, follow these steps to set up your inventory:

  1. Provision your cloud infrastructure by using the Terraform Provider collection.
  2. Configure a dynamic inventory in Ansible Automation Platform using the relevant cloud provider plugin.
  3. Set the inventory refresh frequency to account for the time required for new instances to become reachable after creation.
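For step 2, a hypothetical dynamic inventory source for AWS using the amazon.aws.aws_ec2 inventory plugin might look like the following; the region and the tag filter are assumptions you should adjust to match how your Terraform configuration tags instances:

```yaml
---
# Hypothetical aws_ec2.yml inventory source. The region and the tag
# filter are placeholders; adjust them to match your Terraform tagging.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  # Discover only instances that your Terraform configuration tagged.
  tag:managed_by: terraform
keyed_groups:
  # Group hosts by their Name tag.
  - key: tags.Name
    prefix: tag
```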

1.4.3. Dynamically override execution settings

As an administrator using the Terraform provider, you can configure optional Prompt on Launch parameters in Ansible job and workflow templates to dynamically override default execution settings at runtime.

Essentially, this means that you can use one Ansible template with different Terraform *.tf files to deploy many environments. Terraform provides the values when the Ansible job or workflow runs.

The Ansible playbooks and templates stored in Ansible Automation Platform remain reusable and independent of the specific Terraform configuration that provisioned them.

You must do the following steps to use Prompt on Launch:

  • Ansible UI: Select the Prompt on Launch checkbox for any of the supported fields in the Ansible job or workflow template.
  • Terraform: Set the values for the corresponding fields in the *.tf file that will launch jobs from that template. If the corresponding value is not set in the *.tf file, then the run fails and Ansible Automation Platform sends an error message.

The supported Prompt on Launch settings include:

  • Inventory: Allows Terraform to specify the inventory that will be used by the Ansible job template.
  • Extra variables: To use extra variables to pass data or trigger actionable workflows, provide either a JSON or YAML string.
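As a sketch, assuming the resource and data source names from the earlier provider configuration example, and assuming your provider release supports the extra_vars attribute on the aap_job resource, Terraform can supply both prompted values at launch:

```hcl
# Hypothetical example: the "Demo Job Template" has Inventory and
# Extra variables set to Prompt on Launch, so Terraform supplies both.
resource "aap_job" "configure_vm" {
  job_template_id = data.aap_job_template.demo_job_template.id
  # Overrides the template's default inventory at runtime.
  inventory_id = aap_inventory.my_inventory.id
  # Extra variables passed to the playbook; the values are placeholders.
  extra_vars = jsonencode({
    app_version = "1.2.3"
  })
}
```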

1.4.4. Using TF Actions and Ansible Automation Platform

Use Terraform (TF) Actions with Ansible Automation Platform to trigger automated configuration after your infrastructure is provisioned.

1.4.4.1. About Terraform Actions and Ansible Automation Platform

Terraform (TF) Actions add an imperative action block to the HCL language, letting you execute steps after infrastructure is provisioned without leaving the declarative Terraform workflow. This keeps the entire infrastructure and configuration process visible in your Terraform configuration.

TF Actions can be used to trigger Ansible automation for configuration management, such as sending an event and payload to Ansible Automation Platform to configure newly provisioned virtual machines.

There are two actions implemented with the Terraform provider for Ansible Automation Platform:

  • Launch a job directly: Runs the job as a direct, immediate execution request to Ansible Automation Platform. You must explicitly define which specific Ansible Automation Platform job template the TF Action should call.
  • Use Event-Driven Ansible: Sends an event to Ansible Automation Platform, which then uses rulebooks to intelligently decide which playbook to run based on the event’s payload. This allows for more dynamic, scalable and reactive automation.

1.4.4.2. Using TF Actions as a direct job

When you use TF Actions to launch jobs directly with Ansible Automation Platform, the process is streamlined and sequential.

The benefit of this approach is a clean, predictable state: the Ansible job launches during the Terraform apply cycle, and Terraform receives a clear, binary status. Note that each change launches a separate job with identical configuration.

This method can be useful when you want to execute Ansible automation against newly provisioned servers. For example, last mile provisioning or applying a routine security patching job on a new host.

Prerequisites

  • You have configured the AAP Terraform provider to authenticate with Ansible Automation Platform.
  • You have configured the AWS Terraform provider to authenticate with Amazon Web Services.

    Note

    The example below uses Amazon Web Services (AWS) and requires an AWS account that might incur charges. You can adapt the pattern to use a different cloud provider.

  • You have job templates configured with:

    • Inventory set to prompt on launch.
    • A machine credential (private key) matching a public key available in a local file.

Procedure

  1. Define the aap_job_launch action in your *.tf file.
  2. Add a lifecycle block with an action_trigger to define which action is invoked when the corresponding lifecycle event fires.

    Example

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }

    aap = {
      source  = "ansible/aap"
      version = "~> 1.4.0"
    }
  }
}

provider "aap" {
  # Configure authentication as needed.
}

provider "aws" {
  region = "us-west-1"
  # Configure authentication as needed.
}

variable "public_key_path" {
  type        = string
  description = "Local path to a public key file to inject into the VM. Your AAP Job Template must have the matching private key configured as a machine credential."
}

resource "aws_key_pair" "key_pair" {
  key_name   = "aap-terraform-actions-demo-key"
  public_key = file(var.public_key_path)
}

data "aws_ami" "rhel_ami" {
  most_recent = true

  filter {
    name   = "name"
    values = ["RHEL-9*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["309956199498"] # Red Hat
}

resource "aws_instance" "instance" {
  ami                         = data.aws_ami.rhel_ami.id
  instance_type               = "t2.micro"
  associate_public_ip_address = true
  key_name                    = aws_key_pair.key_pair.key_name
}

# Look up Organization ID

data "aap_organization" "organization" {
  name = "Default"
}

# Create an inventory

resource "aap_inventory" "inventory" {
  name         = "Actions Demo Inventory"
  organization = data.aap_organization.organization.id
}

data "aap_job_template" "job_template" {
  name              = "Demo Job Template"
  organization_name = data.aap_organization.organization.name
}

#
# Direct job launch action example
#

resource "aap_host" "host" {
  inventory_id = aap_inventory.inventory.id
  name         = resource.aws_instance.instance.public_ip
  # Setting a value of 10 for SSH retries because terraform will mark the
  # instance 'created' before it is ready to accept connections from Ansible.
  variables = jsonencode(
    {
      "ansible_ssh_retries" : 10
    }
  )
  # Configure a job launch after the host is created in inventory
  lifecycle {
    action_trigger {
      events  = [after_create]
      actions = [action.aap_job_launch.job_launch]
    }
  }
}

action "aap_job_launch" "job_launch" {
  config {
    inventory_id        = aap_inventory.inventory.id
    job_template_id     = data.aap_job_template.job_template.id
    wait_for_completion = true
  }
}
  3. (Required) Change the job template name and the inventory name in this example to match your environment.
  4. (Optional) You can set owners to the Red Hat AWS account ID, as in the example, so that the most recent RHEL image is selected each time the plan runs.
  5. (Optional) Set additional parameters as needed. For example, if you set wait_for_completion to true, Terraform waits until the job is created and reaches a final state before continuing. You can also set wait_for_completion_timeout_seconds to control the timeout limit.
  6. Update and commit the Terraform code.
  7. Create the Terraform plan, review it, and apply it.
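The timeout control mentioned in the steps above can be sketched as follows. This fragment assumes the same inventory and job template data sources as the example above; the 600-second value is illustrative.

```hcl
# Sketch: the same direct job launch, with an explicit completion timeout.
action "aap_job_launch" "job_launch" {
  config {
    inventory_id        = aap_inventory.inventory.id
    job_template_id     = data.aap_job_template.job_template.id
    # Block the apply until the job reaches a final state...
    wait_for_completion = true
    # ...but give up after 10 minutes (illustrative value)
    wait_for_completion_timeout_seconds = 600
  }
}
```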

1.4.4.3. Using TF Actions with Event-Driven Ansible

Event-Driven Ansible is an automation feature that allows Ansible Automation Platform to react to real-time events, instead of being triggered on a schedule or by a manual request.

1.4.4.3.1. Configuring an event stream

To use TF Actions with Event-Driven Ansible, you must first configure an event stream in Ansible Automation Platform. TF Actions post events to this stream.

Prerequisites

  • You have configured the AAP Terraform provider to authenticate with Ansible Automation Platform.
  • You have configured the AWS Terraform provider to authenticate with Amazon Web Services.

    Note

    The example below uses Amazon Web Services (AWS) and requires an AWS account that might incur charges. You can adapt the pattern to use a different cloud provider.

  • You have an Ansible Automation Platform inventory named EDA Actions Demo Inventory in the Default organization.
  • You have job templates configured with:

    • Inventory set to EDA Actions Demo Inventory.
    • A machine credential (private key) matching a public key available in a local file.

Procedure

  1. Log in to the Ansible Automation Platform user interface.
  2. Navigate to Automation Decisions → Event Streams.
  3. Click Create event stream.
  4. On the Create event stream page, edit the fields:

    • Name: A descriptive, unique name for your event stream (such as Terraform provider_Events).
    • Organization: Select the organization this event stream will belong to (usually Default or your specific organization).
    • Event stream type: Select the type that matches how you want to receive events. Basic Event Stream (username/password) is supported with this integration.
    • Credential: Select a credential that you have pre-created for authentication with this event stream.
    • Headers: (Optional) Enter comma-separated HTTP header keys that you want to include in the event payload that gets forwarded to the rulebook. Leave this empty to include all headers.
    • Forward events to rulebook activation: This option is typically enabled by default. Disabling it is useful for testing and diagnosing your connection and incoming data without inadvertently triggering any automation.
  5. Click Create event stream. Then navigate to Automation Decisions → Event Streams to verify that the event stream was created and to see the number of events received so far.

    You can also click on the specific stream to see its detailed configuration, including the organization, event stream type, associated credential, and event forwarding settings.

  6. Set up a rulebook activation. Make sure to:

    1. Add the event stream to the rulebook.
    2. (Recommended) Select the Rulebook activation enabled? option to automatically start the activation after creation.
    3. Activate the rulebook.
  7. Select Automation Decisions → Rulebook Activations to verify that the rulebook is active and check its status.
1.4.4.3.2. Configuring TF Actions

To connect the event stream to Terraform actions, you configure the main TF file (*.tf) in Terraform.

Procedure

  1. In your *.tf file, add a lifecycle block that calls the Event-Driven Ansible event stream. The after_create event triggers the action.aap_eda_eventstream_post.event_post action.

    Example

    The following example shows a lifecycle block added to the provisioning of an AWS EC2 server. After the new server is provisioned, the action runs.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 6.0"
        }
    
        aap = {
          source  = "ansible/aap"
          version = "~> 1.4.0"
        }
      }
    }
    
    provider "aap" {
      # Configure authentication as needed.
    }
    
    provider "aws" {
      region = "us-west-1"
      # Configure authentication as needed.
    }
    
    variable "public_key_path" {
      type        = string
      description = "Local path to a public key file to inject into the VM. Your AAP Job Template must have the matching private key configured as a machine credential."
    }
    
    variable "event_stream_username" {
      type = string
    }
    
    variable "event_stream_password" {
      type      = string
      sensitive = true # Redact the password in plan and apply output
    }
    
    resource "aws_key_pair" "key_pair" {
      key_name   = "aap-terraform-actions-demo-key"
      public_key = file(var.public_key_path)
    }
    
    data "aws_ami" "rhel_ami" {
      most_recent = true
    
      filter {
        name   = "name"
        values = ["RHEL-9*"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      owners = ["309956199498"] # Red Hat
    }
    
    resource "aws_instance" "instance" {
      ami                         = data.aws_ami.rhel_ami.id
      instance_type               = "t2.micro"
      associate_public_ip_address = true
      key_name                    = aws_key_pair.key_pair.key_name
    }
    
    # Look up an inventory
    
    data "aap_inventory" "inventory" {
      name              = "EDA Actions Demo Inventory"
      organization_name = "Default"
    }
    
    #
    # EDA Event launch action example
    #
    
    resource "aap_host" "host" {
      inventory_id = data.aap_inventory.inventory.id
      name         = resource.aws_instance.instance.public_ip
      # Configure an EDA eventstream POST after the host is created in inventory
      lifecycle {
        action_trigger {
          events  = [after_create]
          actions = [action.aap_eda_eventstream_post.event_post]
        }
      }
    }
    
    data "aap_eda_eventstream" "eventstream" {
      name = "TF Actions Event Stream"
    }
    
    action "aap_eda_eventstream_post" "event_post" {
      config {
        limit             = "all"
        template_type     = "job"
        job_template_name = "Demo Job Template"
        organization_name = "Default"
        event_stream_config = {
          username = var.event_stream_username
          password = var.event_stream_password
          url      = data.aap_eda_eventstream.eventstream.url
        }
      }
    }
  2. (Required) Configure the following parameters:

    • event_stream_config: (Attributes) Details for the Event-Driven Ansible event stream. You must include:

      • username: (String) Username to use when performing the POST to the event stream URL
      • password: (String) Password to use when performing the POST to the event stream URL
      • url: (String) URL to receive the event POST
    • limit: (String) Ansible Automation Platform limit for job execution
    • organization_name: (String) Organization name
    • template_type: (String) Template type: either job or workflow_job
  3. (Optional) You can set owners to the Red Hat AWS account ID, as in the example, so that the most recent RHEL image is selected each time the plan runs.
  4. (Optional) Set additional parameters as needed.
  5. In your Event-Driven Ansible rulebook, configure a rule that maps the event payload to the target job or workflow template.

    Example:

- name: Dispatch TF Workflow Job Template Action
  condition: event.payload.template_type == "workflow"
  throttle:
    once_after: 1 minute
    group_by_attributes:
      - event.payload.workflow_template_name
      - event.payload.limit
      - event.payload.organization_name
  actions:
    - debug:
        msg: "Executing Workflow Template {{ event.payload.workflow_template_name }}"
    - run_workflow_template:
        name: "{{ event.payload.workflow_template_name }}"
        organization: "{{ event.payload.organization_name }}"
    - debug:
        msg: "Executed Workflow Job Template {{ event.payload.workflow_template_name }}"
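For reference, a workflow variant of the event post action might look like the following sketch. The workflow_template_name attribute is an assumption inferred from the event payload keys in the rulebook rule above; check your provider version's documentation for the exact attribute name. The variables and data source reused here come from the earlier example.

```hcl
# Sketch (unverified attribute name): posting an event that targets a
# workflow job template instead of a job template.
action "aap_eda_eventstream_post" "workflow_event_post" {
  config {
    limit         = "all"
    template_type = "workflow_job"
    # Assumed attribute, inferred from the rulebook payload keys:
    workflow_template_name = "Demo Workflow Template"
    organization_name      = "Default"
    event_stream_config = {
      username = var.event_stream_username
      password = var.event_stream_password
      url      = data.aap_eda_eventstream.eventstream.url
    }
  }
}
```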
1.4.4.3.3. Creating and applying the plan

After you configure your Terraform plan to include Event-Driven Ansible events, you create and apply the plan to trigger the events.

Procedure

  1. Run terraform init to initialize your working directory.
  2. Run terraform plan to create the plan. The following example also saves the plan to a file named tfplan.out, but you can specify any name for the plan file. Saving the plan is a best practice for automation because terraform apply then executes exactly the saved plan.

    terraform plan -out=tfplan.out
  3. Review the plan output.
  4. Apply the saved plan.

    terraform apply tfplan.out

    This creates and sends an event to the specified event stream. As each resource is created, TF actions are invoked and the corresponding Ansible Automation Platform playbooks are executed sequentially.

Verification

  1. Verify that the runs are updated in the Terraform user interface. Drill down on a resource to see that the action was invoked and a post event was executed.
  2. From the Ansible Automation Platform user interface, verify that the event is successfully received by Event-Driven Ansible and triggers the appropriate rulebook activation:

    1. Check the Event Streams dashboard to see the TF Actions events were received.
    2. Check the Jobs dashboard to see the jobs running sequentially and with a Success status.
    3. Check the Inventory dashboard to see the updates. For example, if you created new servers, check the Hosts tab for the Terraform provisioned inventory.
1.4.4.3.4. Example rulebook

The following rulebook example shows how to use TF actions and Event-Driven Ansible to listen for events on a webhook.

- name: Listen for events on a webhook
  hosts: all

  ## Define our source for events

  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
      filters:
        - ansible.eda.insert_hosts_to_meta:
            host_path: payload.limit

  ## Define the conditions we are looking for

  rules:
    - name: Dispatch TF Job Template Action
      condition: event.payload.template_type == "job"
      throttle:
        once_after: 1 minute
        group_by_attributes:
          - event.payload.job_template_name
          - event.payload.limit
          - event.payload.organization_name
      actions:
        - debug:
            msg: "Executing Job Template {{ event.payload.job_template_name }}"
        - run_job_template:
            name: "{{ event.payload.job_template_name }}"
            organization: "{{ event.payload.organization_name }}"
        - debug:
            msg: "Executed Job Template {{ event.payload.job_template_name }}"
    - name: Dispatch TF Workflow Job Template Action
      condition: event.payload.template_type == "workflow"
      throttle:
        once_after: 1 minute
        group_by_attributes:
          - event.payload.workflow_template_name
          - event.payload.limit
          - event.payload.organization_name
      actions:
        - debug:
            msg: "Executing Workflow Template {{ event.payload.workflow_template_name }}"
        - run_workflow_template:
            name: "{{ event.payload.workflow_template_name }}"
            organization: "{{ event.payload.organization_name }}"
        - debug:
            msg: "Executed Workflow Job Template {{ event.payload.workflow_template_name }}"