Deploy the Ansible Lightspeed intelligent assistant

This section provides information about the procedures involved in deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform.

Create a chatbot configuration secret

Create a configuration secret for the Ansible Lightspeed intelligent assistant, so that you can connect the intelligent assistant to the Ansible Automation Platform operator.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Workloads > Secrets.
  3. From the Projects list, select the namespace that you created when you installed the Ansible Automation Platform operator.
  4. Click Create > Key/value secret.
  5. In the Secret name field, enter a unique name for the secret. For example, chatbot-configuration-secret.
  6. Add the following keys and their associated values individually:

    Settings for all LLM setups

    chatbot_model

    Enter the LLM model name that is configured on your LLM setup.

    chatbot_url

    Enter the inference API base URL on your LLM setup. For example, https://your_inference_api/v1. If you are using Microsoft Azure OpenAI, then set the base URL to https://your_inference_api/openai/v1.

    chatbot_token

    Enter the API token or the API key. This token is sent along with the authorization header when an inference API is called.

    chatbot_llm_provider_type

    Optional

    Enter the value as per the provider type of your LLM setup:

    • Red Hat Enterprise Linux AI: rhelai_vllm
    • Red Hat OpenShift AI: rhoai_vllm
    • OpenAI: openai
    • Microsoft Azure OpenAI: azure_openai

    chatbot_model_config_extras

    Optional

    Use this field to pass a JSON dictionary of extra parameters directly to the model provider, for settings not covered by the other standard fields.

    For example, you can specify a parameter api_version for Microsoft Azure OpenAI in the JSON format '{"api_version": "<your API version>"}'.

    chatbot_agent_config_extras

    Optional

    Use this parameter to customize agent behavior, such as controlling the temperature of the LLM. For example, '{"chatbot_temperature_override": 1}'.

    Additional settings for MCP server configuration

    • aap_gateway_url
    • aap_controller_url

    Use these values to configure a Model Context Protocol (MCP) server that interfaces with the Ansible Lightspeed intelligent assistant.

    The values aap_gateway_url and aap_controller_url are internal URLs accessible to the platform gateway and automation controller services on the OpenShift cluster. For example, if the name of your Ansible Automation Platform custom resource is myaap, these URLs will be:

    • aap_gateway_url: http://myaap
    • aap_controller_url: http://myaap-controller-service

    For MCP server configuration:

    • If none of these values are configured, no MCP server is provisioned or registered as a tool with the underlying LLM at runtime.
    • If you configure the aap_gateway_url value only, the Ansible Lightspeed Service MCP server is provisioned. Authentication attempts to use the JSON Web Token (JWT) token associated with the user’s authenticated context.
    • If you configure both values aap_gateway_url and aap_controller_url, the Ansible Lightspeed Service MCP server and Ansible Automation Platform Controller Service MCP server are both configured. Authentication attempts to use the JWT token associated with the user’s authenticated context.
  7. Click Create. The chatbot configuration secret is created.

Examples of chatbot configuration secrets

The following snippets show examples of secret configurations for different LLM providers.

# Example of a secret configuration for Red Hat OpenShift AI 
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret
  namespace: aap
stringData:
  chatbot_llm_provider_type: rhoai_vllm
  chatbot_url: https://llm-dev-wisdom-model-staging.apps.stage2-west.v2dz.p1.openshiftapps.com/v1
  chatbot_model: granite-3.3-8b-instruct
  chatbot_token: <token number>

# Example of a secret configuration for OpenAI
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret
  namespace: aap
stringData:
  chatbot_llm_provider_type: openai
  chatbot_url: https://api.openai.com/v1
  chatbot_model: gpt-4o-mini
  chatbot_token: <token number>

# Example of a secret configuration for Microsoft Azure OpenAI
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret
  namespace: aap
stringData:
  chatbot_llm_provider_type: azure_openai
  chatbot_url: https://ols-test.openai.azure.com
  chatbot_model: gpt-4o-mini
  chatbot_token: <token number>
  chatbot_model_config_extras: '{"api_version": "2025-01-01-preview"}'
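The MCP server URLs described in the previous procedure can be added to the same secret. The following sketch assumes an Ansible Automation Platform custom resource named myaap in the aap namespace, matching the earlier examples:

```yaml
# Example of a secret configuration for Red Hat OpenShift AI with MCP servers configured
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret
  namespace: aap
stringData:
  chatbot_llm_provider_type: rhoai_vllm
  chatbot_url: https://your_inference_api/v1
  chatbot_model: granite-3.3-8b-instruct
  chatbot_token: <token number>
  aap_gateway_url: http://myaap                         # assumes the custom resource is named myaap
  aap_controller_url: http://myaap-controller-service
```

With both URLs set, both the Ansible Lightspeed Service MCP server and the Ansible Automation Platform Controller Service MCP server are configured.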

Update the YAML file of the Ansible Automation Platform operator

After you create the chatbot configuration secret, you must update the YAML file of the Ansible Automation Platform operator to use the secret.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Operators > Installed Operators.
  3. From the list of installed operators, select the Ansible Automation Platform operator.
  4. Locate and select the Ansible Automation Platform custom resource, and then click the required app.
  5. Select the YAML tab.
  6. Scroll to the spec: section, and add the following details under it:
    spec:
      lightspeed:
        disabled: false
        chatbot_config_secret_name: <name of your chatbot configuration secret>
  7. Click Save. The Ansible Lightspeed intelligent assistant service takes a few minutes to set up.
    Note

    Upgrading from Ansible Automation Platform 2.5 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant is unavailable for approximately 60 seconds while its pod restarts.
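Put together, the lightspeed portion of the custom resource looks like the following sketch. The apiVersion value and the myaap name shown here are assumptions; keep whatever values your existing custom resource already uses:

```yaml
# Sketch of an AnsibleAutomationPlatform custom resource with Ansible Lightspeed enabled
apiVersion: aap.ansible.com/v1alpha1   # assumed; keep the apiVersion of your existing resource
kind: AnsibleAutomationPlatform
metadata:
  name: myaap                          # assumed name, matching the examples in this section
  namespace: aap
spec:
  lightspeed:
    disabled: false
    chatbot_config_secret_name: chatbot-configuration-secret
```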

Results

  1. Verify that the chat interface service is running successfully:
    1. Navigate to Workloads > Pods.
    2. Filter with the term api and ensure that the following pods are displayed in Running status:
      • myaap-lightspeed-api-<version number>
      • myaap-lightspeed-chatbot-api-<version number>
  2. Verify the MCP server configuration if you specified either the aap_gateway_url or the aap_controller_url parameter:
    • Open the lightspeed-chatbot-api pod and click the Containers section.
      • If the ansible-mcp-lightspeed container is displayed, the Ansible Lightspeed MCP server is running.
      • If the ansible-mcp-controller container is displayed, the Ansible Automation Platform Controller Service MCP server is running.
  3. Verify that the chat interface is displayed on the Ansible Automation Platform:
    1. Access the Ansible Automation Platform:
      1. Navigate to Operators > Installed Operators.
      2. From the list of installed operators, click Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource, and then click the app that you created.
      4. From the Details tab, record the information available in the following fields:
        • URL: This is the URL of your Ansible Automation Platform instance.
        • Gateway Admin User: This is the username to log in to your Ansible Automation Platform instance.
        • Gateway Admin password: This is the password to log in to your Ansible Automation Platform instance.
      5. Log in to the Ansible Automation Platform using the URL, username, and password that you recorded earlier.
    2. Access the Ansible Lightspeed intelligent assistant:
      1. Click the Ansible Lightspeed intelligent assistant icon that is displayed at the top right corner of the taskbar.
      2. Verify that the chat interface is displayed, as shown in the following image:

        Ansible Lightspeed intelligent assistant.

Change your LLM model

If you have already deployed Ansible Lightspeed intelligent assistant but want to change your LLM model, you can create a new chatbot configuration secret for the new LLM model.

About this task

Alternatively, if you want to use the same chatbot configuration secret, you must delete and redeploy the Ansible Lightspeed intelligent assistant.

Procedure

  • To create and use a new chatbot configuration secret:
    1. Create a new chatbot configuration secret with a different name for the new LLM model.
    2. Update the YAML file of the Ansible Automation Platform operator with the new chatbot configuration secret name.

      The Ansible Automation Platform operator detects the new configuration and redeploys the Ansible Lightspeed intelligent assistant.

    3. Verify that the chat interface service is running successfully. See the verification steps mentioned in the topic Update the YAML file of the Ansible Automation Platform operator.
      Important

      Do not update the existing chatbot configuration secret with the new LLM model, as the reconciliation logic does not check the updates made to the secret.

  • To use the same chatbot secret by deleting and redeploying the Ansible Lightspeed intelligent assistant:
    1. Disable the Ansible Lightspeed operator instance:
      1. Navigate to Operators > Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab and, under the lightspeed category of the spec: section, specify disabled: true.
      5. Click Save.
    2. Delete the Ansible Lightspeed operator instance:
      1. Navigate to Operators > Installed Operators.
      2. From the list of installed operators, select Ansible Lightspeed and delete the operator.
    3. Re-enable the Ansible Automation Platform instance:
      1. Navigate to Operators > Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab and, under the lightspeed category of the spec: section, specify disabled: false.
      5. Click Save.
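The two YAML edits in this flow only toggle one field. As a sketch, the lightspeed category of the spec: section changes as follows:

```yaml
# Step 1: disable the Ansible Lightspeed instance
spec:
  lightspeed:
    disabled: true
---
# Step 3: re-enable it after deleting the instance
spec:
  lightspeed:
    disabled: false
```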

Use the Ansible Lightspeed intelligent assistant

After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.

Access the Ansible Lightspeed intelligent assistant

  1. Log in to the Ansible Automation Platform.
  2. Click the Ansible Lightspeed intelligent assistant icon that is displayed at the top right corner of the taskbar.

    The Ansible Lightspeed intelligent assistant window opens with a welcome message, as shown in the following image:

    Ansible Lightspeed intelligent assistant

Use the Ansible Lightspeed intelligent assistant

You can perform the following tasks:

  • Ask questions in the prompt field and get answers about the Ansible Automation Platform.
    Note

    If you are using an IBM Granite 3.3 series AI model, you might experience a delay of about one minute when waiting for a chat response. To resolve this issue, restart the chat session.

  • View the chat history of all conversations in a chat session.
  • Search the chat history using a user prompt or answer. The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.
  • Restore an earlier chat by clicking the relevant entry from the chat history.
  • Give feedback on the quality of the chat answers, by clicking the Thumbs up or Thumbs down icon.
  • Copy and record the answers by clicking the Copy icon.
  • Change the mode of the virtual assistant to dark or light mode, by clicking the Sun icon from the top right corner of the toolbar.
  • Clear the context of an existing chat by using the New chat button in the chat history.
  • Close the chat interface while working on the Ansible Automation Platform.

Deploy Red Hat Ansible Lightspeed on containerized Ansible Automation Platform

As an organization administrator, you can deploy and use Red Hat Ansible Lightspeed when you perform a new container-based installation of Ansible Automation Platform 2.6.

Overview

You can deploy and use Red Hat Ansible Lightspeed when you perform a new container-based installation of Ansible Automation Platform 2.6, or upgrade from containerized Ansible Automation Platform 2.5 to 2.6.

Red Hat Ansible Lightspeed includes two main components that enhance your automation experience with generative artificial intelligence (AI):

  • Ansible Lightspeed intelligent assistant: An AI-powered chat interface embedded within the Ansible Automation Platform.
  • Ansible Lightspeed coding assistant: A generative AI service that helps developers create Ansible content more efficiently and accurately.
Important

Red Hat does not collect any telemetry data from your interactions with Red Hat Ansible Lightspeed.

Ansible Lightspeed intelligent assistant

Ansible Lightspeed intelligent assistant is an intuitive chat interface embedded in the Ansible Automation Platform that uses generative artificial intelligence (AI) to answer questions about the platform.

The Ansible Lightspeed intelligent assistant interacts with users in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses empower Ansible users to work more efficiently, thereby improving productivity and the overall quality of their work.

To use the Ansible Lightspeed intelligent assistant, you need:

  • A valid subscription to Ansible Automation Platform.
  • Deployment of an LLM service that is hosted on one of these platforms: Red Hat Enterprise Linux AI, Red Hat OpenShift AI, or Red Hat AI Inference Server.

Integration with MCP server

Ansible Lightspeed intelligent assistant integration with the Model Context Protocol (MCP) server is available as a Technology Preview release. MCP is an open protocol that enables applications to give real-time context to LLMs.

This integration enables the Ansible Lightspeed intelligent assistant to request and receive the latest information from external resources, and give more relevant, dynamically-sourced answers when responding to your questions. To set up this integration, you need to specify the MCP server variables when configuring the Red Hat Ansible Lightspeed variables in the inventory file.
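On containerized installations, this integration is switched on through the MCP server variables in the inventory file. A minimal sketch, written in the same style as the installation example in this document, and assuming that setting the lightspeed_mcp_* variables to true enables the corresponding MCP servers:

```yaml
# Sketch: enable the MCP server integration in the inventory file
lightspeed_mcp_lightspeed_enabled: true    # Ansible Lightspeed Service MCP server
lightspeed_mcp_controller_enabled: true    # Automation Controller Service MCP server
```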

Note

Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features give early access to upcoming product features, enabling customers to test functionality and leave feedback during the development process.

Ansible Lightspeed coding assistant

The Ansible Lightspeed coding assistant is a generative AI service that works with IBM watsonx Code Assistant to help developers create and maintain Ansible content more efficiently. It can generate code recommendations for:

  • Single-task or multi-task recommendations
  • Playbooks with explanations
  • Roles with explanations

Ansible Lightspeed coding assistant generates code recommendations that adhere to Ansible best practices, while IBM watsonx Code Assistant fine-tunes models to improve the accuracy of suggested recommendations by using your organization’s existing Ansible content. This integration produces more accurate, reliable, and workflow-integrated automation code. It also shortens the onboarding time for new Ansible developers and improves team productivity.

To use the Ansible Lightspeed coding assistant, you need:

  • A valid subscription to Red Hat Ansible Automation Platform.
  • A valid subscription to IBM watsonx Code Assistant.

Deployment models

The Ansible Lightspeed coding assistant supports two deployment models. No telemetry data is collected in either configuration.

  • On-premise deployment

    Both Red Hat Ansible Lightspeed and the IBM watsonx Code Assistant model (IBM Cloud Pak for Data) are on-premise deployments.

  • Hybrid deployment

    Red Hat Ansible Lightspeed is an on-premise deployment, while IBM watsonx Code Assistant model is a cloud deployment.

    A hybrid deployment model provides the following benefits:

    • Flexibility to choose an environment that best suits your organizational needs.
    • Integrated authentication by using the Ansible Automation Platform for user authentication and removing the need for a separate Red Hat cloud login.
    • Regional choice for organizations to deploy Red Hat Ansible Lightspeed in their preferred geographical region.

Ansible Automation Platform requirements

  • Licensing requirements:
    • A valid Ansible Automation Platform subscription.
    • Administrator privileges for the Ansible Automation Platform.
  • Additional requirements for Ansible Lightspeed coding assistant:
    • A valid subscription to IBM watsonx Code Assistant (for on-premise deployment), or IBM watsonx Code Assistant for Red Hat Ansible Lightspeed on Cloud Pak for Data (for hybrid deployment).
    • An API key and a model ID from IBM watsonx Code Assistant.
    • VS Code version 1.70.1 or later.
  • Additional requirements for Ansible Lightspeed intelligent assistant:
    • Deployment of an LLM service that is hosted on one of these platforms: Red Hat Enterprise Linux AI, Red Hat OpenShift AI, or Red Hat AI Inference Server.

Large Language Model (LLM) provider requirements

You must configure the LLM provider that you want to use before deploying the Ansible Lightspeed intelligent assistant. An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.

Your LLM must have tool calling enabled to handle tool-related requests. Tool calling allows the assistant to interact with platform services and execute complex workflows.

Ansible Lightspeed intelligent assistant supports the following LLM providers:

  • Red Hat LLM providers:
    • Red Hat Enterprise Linux AI

      You can configure Red Hat Enterprise Linux AI as the LLM provider. Because Red Hat Enterprise Linux AI runs in a different environment than the Ansible Lightspeed deployment, the model deployment must allow access by using a secure connection.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.

    • Red Hat OpenShift AI

      You must deploy an LLM on the Red Hat OpenShift AI single-model serving platform that uses the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different OpenShift environment than the Ansible Lightspeed deployment, include a route to expose the model deployment outside the cluster.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.

      Note

      For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.

    • Red Hat AI Inference Server

      You can deploy an LLM by using Red Hat AI Inference Server as your inference runtime. Red Hat AI Inference Server supports vLLM runtimes for efficient model serving and can be configured to work with Ansible Lightspeed intelligent assistant.

      If the Red Hat AI Inference Server deployment is in a different environment than the Ansible Lightspeed deployment, ensure the model deployment allows access by using a secure connection and configure appropriate network routing.

      Ansible Lightspeed intelligent assistant supports vLLM Server when self-hosting an LLM with Red Hat AI Inference Server as the inference engine.

  • Third-party LLM providers:
    • OpenAI

      To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.

    • Microsoft Azure OpenAI

      To use Microsoft Azure with the Ansible Lightspeed intelligent assistant, you need access to Microsoft Azure OpenAI.

Process to deploy Red Hat Ansible Lightspeed on a container-based installation


Deploy Red Hat Ansible Lightspeed during a container-based installation of Ansible Automation Platform

This task is for an Ansible Automation Platform administrator who wants to deploy Red Hat Ansible Lightspeed for all Ansible users in the organization.

Perform the following tasks:

  1. Configure the Red Hat Ansible Lightspeed variables in the inventory file.
  2. Install or upgrade to containerized Ansible Automation Platform 2.6.
  3. If you want to install the Ansible Lightspeed coding assistant, configure the Ansible VS Code extension.
  4. Optional: Change your LLM model if you want to use a different LLM provider after deploying Red Hat Ansible Lightspeed.

Access and use the Ansible Lightspeed intelligent assistant

This task is for all Ansible users within the organization who want to use the Ansible Lightspeed intelligent assistant to get answers to their questions about the Ansible Automation Platform.

Access and use the Ansible Lightspeed coding assistant

This task is for all Ansible users within the organization who want to use the coding assistant to develop Ansible content:

  • Single-task or multi-task recommendations
  • Create playbooks and view playbook explanations
  • Create roles and view role explanations

Configure Red Hat Ansible Lightspeed variables

To deploy Red Hat Ansible Lightspeed, configure the required installation variables in your inventory file.

Procedure
  1. Add the required installation variables to your inventory file under the [all:vars] group.
  2. Add the specific variables that enable the Ansible Lightspeed coding assistant, the Ansible Lightspeed intelligent assistant, and the MCP server integration. Refer to the Appendix: Red Hat Ansible Lightspeed variables for information about required and optional variables.
    # This is the list of inventory file variables required to deploy Red Hat Ansible Lightspeed on a containerized installation.
    
    # Consult the docs if you are unsure what to add.
    # For information about required and optional variables, refer to the Appendix: Red Hat Ansible Lightspeed variables
    # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.6/html/containerized_installation/appendix-inventory-files-vars#lightspeed-variables
    
    # This section is for your Red Hat Ansible Lightspeed host
    # ---------------------------------------------------------
    [ansiblelightspeed]
    aap.example.com
    
    # This section is for Red Hat Ansible Lightspeed deployment
    # ----------------------------------------------------------
    lightspeed_admin_user: <set your own>
    lightspeed_admin_password: <set your own>
    lightspeed_admin_email: <set your own>
    lightspeed_pg_host: <set your own>
    lightspeed_pg_password: <set your own>
    
    
    # This section is to configure Ansible Lightspeed intelligent assistant
    # ----------------------------------------------------------------------
    lightspeed_chatbot_model_url: <set your own>
    lightspeed_chatbot_model_api_key: <set your own>
    lightspeed_chatbot_model_id: <set your own>
    lightspeed_chatbot_default_provider: 'rhoai'
    lightspeed_chatbot_model_extra_settings: {}
    lightspeed_chatbot_agent_extra_settings: {} 
    # If you want to use Microsoft Azure OpenAI as the LLM provider, specify the lightspeed_chatbot_model_extra_settings value as '{"api_type": ""}', and the lightspeed_chatbot_model_url value to 'https://your_inference_api/openai/v1'.
    
    # This section is to configure Ansible Lightspeed intelligent assistant with MCP server integration
    # --------------------------------------------------------------------------------------------------
    lightspeed_mcp_controller_enabled: false
    lightspeed_mcp_lightspeed_enabled: false
    
    
    # This section is to configure Ansible Lightspeed coding assistant
    # -----------------------------------------------------------------
    lightspeed_wca_model_type: 'wca'
    lightspeed_wca_model_url: 'https://api.dataplatform.cloud.ibm.com'
    lightspeed_wca_model_verify_ssl: true
    lightspeed_wca_model_enable_anonymization: true
    lightspeed_wca_health_check: true
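As a concrete variant of the Microsoft Azure OpenAI comment in the example above, the intelligent assistant settings might look like the following sketch; the URL and credential values are placeholders, not working values:

```yaml
# Sketch: Ansible Lightspeed intelligent assistant settings for Microsoft Azure OpenAI
lightspeed_chatbot_default_provider: 'azure'
lightspeed_chatbot_model_url: 'https://your_inference_api/openai/v1'
lightspeed_chatbot_model_id: <set your own>
lightspeed_chatbot_model_api_key: <set your own>
lightspeed_chatbot_model_extra_settings: '{"api_type": ""}'
```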

Red Hat Ansible Lightspeed variables

Configure Red Hat Ansible Lightspeed by setting inventory file variables during installation. Use this reference to determine which variables to set for your deployment requirements.

Red Hat Ansible Lightspeed variables

Inventory file variables for Red Hat Ansible Lightspeed.

RPM variable name Container variable name Description Required or optional Default

N/A

lightspeed_admin_password

Red Hat Ansible Lightspeed administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except /, ", or @.

Required

N/A

lightspeed_admin_user

Username used to identify and create the Red Hat Ansible Lightspeed admin user.

Optional

admin

N/A

lightspeed_chat_rate_throttle

Chat rate throttle.

Optional

10/minute

N/A

lightspeed_nginx_client_max_body_size

Maximum allowed size for data sent to Red Hat Ansible Lightspeed through NGINX.

Optional

5m

N/A

lightspeed_nginx_disable_hsts

Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Red Hat Ansible Lightspeed. Set this variable to true to disable HSTS.

Optional

false

N/A

lightspeed_nginx_disable_https

Controls whether HTTPS is enabled or disabled for Red Hat Ansible Lightspeed. Set this variable to true to disable HTTPS.

Optional

false

N/A

lightspeed_nginx_hsts_max_age

Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Red Hat Ansible Lightspeed.

Optional

63072000

N/A

lightspeed_nginx_http_port

Port number that Red Hat Ansible Lightspeed listens on for HTTP requests.

Optional

8084

N/A

lightspeed_nginx_https_port

Port number that Red Hat Ansible Lightspeed listens on for HTTPS requests.

Optional

8447

N/A

lightspeed_nginx_https_protocols

Protocols that Red Hat Ansible Lightspeed will support when handling HTTPS traffic.

Optional

[TLSv1.2, TLSv1.3]

N/A

lightspeed_nginx_user_headers

Custom Nginx headers. List of additional NGINX headers to add to Red Hat Ansible Lightspeed’s NGINX configuration.

Optional

[]

N/A

lightspeed_nginx_read_timeout

Sets the HTTP timeout for end-user requests. The minimum value is 10 seconds.

Optional

3600

N/A

lightspeed_pg_cert_auth

Controls whether client certificate authentication is enabled or disabled on the Red Hat Ansible Lightspeed PostgreSQL database. Set this variable to true to enable client certificate authentication.

Optional

false

N/A

lightspeed_pg_database

Name of the PostgreSQL database used by Red Hat Ansible Lightspeed.

Optional

lightspeed

N/A

lightspeed_pg_host

Hostname of the PostgreSQL database used by Red Hat Ansible Lightspeed.

Required

N/A

lightspeed_pg_password

Password for the Red Hat Ansible Lightspeed PostgreSQL database user. Use of special characters for this variable is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.

Optional

N/A

lightspeed_pg_port

Port number for the PostgreSQL database used by Red Hat Ansible Lightspeed.

Optional

5432

N/A

lightspeed_pg_sslmode

Controls the SSL mode to use when Red Hat Ansible Lightspeed connects to the PostgreSQL database. Valid options include verify-full, verify-ca, require, prefer, allow, disable.

Optional

prefer

N/A

lightspeed_pg_tls_cert

Path to the PostgreSQL SSL/TLS certificate file for Red Hat Ansible Lightspeed.

Optional

N/A

lightspeed_pg_tls_key

Path to the PostgreSQL SSL/TLS key file for Red Hat Ansible Lightspeed.

Optional

N/A

lightspeed_pg_username

Username for the Red Hat Ansible Lightspeed PostgreSQL database user.

Optional

lightspeed

N/A

lightspeed_secret_key

Secret key value used by Red Hat Ansible Lightspeed to sign and encrypt data.

Optional

N/A

lightspeed_tls_cert

Path to the SSL/TLS certificate file for Red Hat Ansible Lightspeed.

Optional

N/A

lightspeed_tls_key

Path to the SSL/TLS key file for Red Hat Ansible Lightspeed.

Optional

N/A

lightspeed_tls_remote

Denotes whether the Red Hat Ansible Lightspeed provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

N/A

lightspeed_use_archive_compression

Controls whether archive compression is enabled or disabled for Red Hat Ansible Lightspeed. You can control this functionality globally by using use_archive_compression.

Optional

true

N/A

lightspeed_use_db_compression

Controls whether database compression is enabled or disabled for Red Hat Ansible Lightspeed. You can control this functionality globally by using use_db_compression.

Optional

false

Ansible Lightspeed coding assistant variables

Inventory file variables for Ansible Lightspeed coding assistant.

RPM variable name Container variable name Description Required or optional Default

N/A

lightspeed_wca_model_type

IBM watsonx Code Assistant model deployment mode: cloud (wca) or on-premise (wca-onprem).

Optional

wca

N/A

lightspeed_wca_model_url

URL of the IBM watsonx Code Assistant model. For cloud deployment, the URL could be https://api.dataplatform.test.cloud.ibm.com.

Optional

N/A

lightspeed_wca_model_api_key

API key of the IBM watsonx Code Assistant model that was generated during the model installation.

Required

N/A

lightspeed_wca_model_id

ID of the IBM watsonx Code Assistant model.

Optional

N/A

lightspeed_wca_model_verify_ssl

Denotes whether or not to verify IBM watsonx Code Assistant’s web certificates when making calls from Red Hat Ansible Lightspeed to itself during installation. Set to false to disable web certificate verification.

Optional

true

N/A

lightspeed_wca_model_enable_anonymization

Controls whether the anonymization of Personally Identifiable Information (PII) is enabled. PII information includes passwords, IP addresses, email addresses, and other sensitive data.

When PII anonymization is enabled, users' personal information is modified to some generic values to protect their data and reduce the risk of data leaks.

You can turn off the anonymization by specifying the value as false if you want to retain all original information as entered by users and improve the quality of the answers.

If you set the value to false and the Ansible administrator is using Red Hat Ansible Lightspeed in hybrid mode (where the model is in IBM watsonx Code Assistant in IBM Cloud) then their users' PII is sent to IBM Cloud.

Optional

true

N/A

lightspeed_wca_model_username

For on-premise deployment only. The username you use to connect to an IBM Cloud Pak for Data deployment.

Optional

N/A

lightspeed_wca_health_check

Enables or disables IBM watsonx Code Assistant health check.

Optional

true

N/A

lightspeed_wca_idp_url

For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) URL.

Optional

N/A

lightspeed_wca_idp_login

For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) username.

Optional

N/A

lightspeed_wca_idp_password

For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) password.

Optional

Ansible Lightspeed intelligent assistant variables

Inventory file variables for Ansible Lightspeed intelligent assistant.

| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A | lightspeed_chatbot_model_url | The inference API base URL on your LLM setup. For example, https://your_inference_api/v1. If you are using Microsoft Azure OpenAI, set the base URL to https://your_inference_api/openai/v1. | Optional | N/A |
| N/A | lightspeed_chatbot_model_verify_ssl | Controls whether SSL/TLS certificate verification is enabled when making HTTPS requests. | Optional | true |
| N/A | lightspeed_chatbot_default_provider | The provider type of your LLM setup. Use one of the following values: Red Hat Enterprise Linux AI: rhelai; Red Hat OpenShift AI: rhoai; OpenAI: openai; Microsoft Azure OpenAI: azure. | Optional | rhoai |
| N/A | lightspeed_chatbot_model_extra_settings | A JSON dictionary of extra parameters passed directly to the model provider, for settings not covered by other standard fields. If you want to use Microsoft Azure OpenAI as the LLM provider, specify the value as '{"api_type": ""}'. | Optional | {} |
| N/A | lightspeed_chatbot_agent_extra_settings | Use this parameter to customize agent behavior, such as controlling the temperature of the LLM. For example, '{"chatbot_temperature_override": 1}'. | Optional | {} |
| N/A | lightspeed_chatbot_chatbot_max_tokens | Maximum number of tokens to generate in a chat response. | Optional | 4096 |
| N/A | lightspeed_chatbot_http_port | Port number that the Ansible Lightspeed intelligent assistant listens on for HTTP requests. | Optional | 8085 |
| N/A | lightspeed_chatbot_model_id | The ID of the LLM model that is configured on your LLM setup. | Optional | N/A |
| N/A | lightspeed_chatbot_model_api_key | The API token or API key of your LLM setup. This token is sent with the authorization header when an inference API is called. | Optional | N/A |

Ansible Lightspeed intelligent assistant integration with MCP server variables

Inventory file variables for Ansible Lightspeed intelligent assistant integration with Model Context Protocol (MCP) server.

| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A | lightspeed_mcp_controller_enabled | Controls whether the Ansible Lightspeed MCP controller is enabled or disabled. | Optional | false |
| N/A | lightspeed_mcp_controller_port | Ansible Lightspeed MCP controller port. | Optional | 8004 |
| N/A | lightspeed_mcp_lightspeed_enabled | Controls whether the Ansible Lightspeed MCP lightspeed service is enabled or disabled. | Optional | false |
| N/A | lightspeed_mcp_lightspeed_port | Ansible Lightspeed MCP lightspeed service port. | Optional | 8005 |
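As a minimal sketch, the MCP integration can be enabled in a containerized deployment by setting the variables above in the inventory file. The port values shown are the documented defaults; adjust them if those ports are already in use:

```ini
# Enable the Ansible Lightspeed MCP controller and lightspeed services.
# Ports shown are the documented defaults.
lightspeed_mcp_controller_enabled=true
lightspeed_mcp_controller_port=8004
lightspeed_mcp_lightspeed_enabled=true
lightspeed_mcp_lightspeed_port=8005
```

After updating the inventory file, rerun the install playbook so that the changes take effect.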

Change your LLM model

To change the LLM model for your containerized Ansible Automation Platform deployment of Ansible Lightspeed intelligent assistant, you must edit the inventory file with the specific details of your new LLM provider and then rerun the install playbook.

Procedure
  1. Edit the inventory file to update the following Ansible Lightspeed intelligent assistant variables with the specific details of your required LLM provider:
    • lightspeed_chatbot_model_url
    • lightspeed_chatbot_model_api_key
    • lightspeed_chatbot_model_id
    • lightspeed_chatbot_default_provider
  2. Rerun the install playbook to apply the updated configuration to your containerized Ansible Automation Platform.
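For example, to switch to a model served by OpenAI, the relevant inventory entries might look like the following sketch. The URL, key, and model ID are placeholders, not values from this document:

```ini
# Hypothetical values; replace with the details of your own LLM provider.
lightspeed_chatbot_model_url=https://api.openai.com/v1
lightspeed_chatbot_model_api_key=<your_api_key>
lightspeed_chatbot_model_id=<your_model_id>
lightspeed_chatbot_default_provider=openai
```

After saving the inventory file, rerun the install playbook, typically with a command of the form `ansible-playbook -i inventory ansible.containerized_installer.install`.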

Use the Ansible Lightspeed intelligent assistant

After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.

Access the Ansible Lightspeed intelligent assistant

  1. Log in to the Ansible Automation Platform.
  2. Click the Ansible Lightspeed intelligent assistant icon that is displayed at the top right corner of the taskbar.

    The Ansible Lightspeed intelligent assistant window opens with a welcome message, as shown in the following image:

    Ansible Lightspeed intelligent assistant

Use the Ansible Lightspeed intelligent assistant

You can perform the following tasks:

  • Ask questions in the prompt field and get answers about the Ansible Automation Platform.

    Note

    If you are using an IBM Granite 3.3 series AI model, you might experience a delay of about one minute when waiting for a chat response. To resolve this issue, restart the chat session.

  • View the chat history of all conversations in a chat session.
  • Search the chat history by user prompt or answer. The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.
  • Restore an earlier chat by clicking the relevant entry in the chat history.
  • Give feedback on the quality of the chat answers by clicking the Thumbs up or Thumbs down icon.
  • Copy and record the answers by clicking the Copy icon.
  • Switch the virtual assistant between dark and light mode by clicking the Sun icon at the top right corner of the toolbar.
  • Clear the context of an existing chat by using the New chat button in the chat history.
  • Close the chat interface while working on the Ansible Automation Platform.

Configure the Ansible VS Code extension

If you deployed the Ansible Lightspeed coding assistant, you must also configure the Ansible VS Code extension with the generated Ansible Lightspeed URL. This configuration enables the Ansible users in your organization to use the Ansible Lightspeed coding assistant to create Ansible content.

Before you begin

  • You have installed VS Code version 1.70.1 or later.
  • Your organization administrator has configured an IBM watsonx Code Assistant model for your organization.
  • Your network or firewall configuration permits ingress traffic on port 8447. This port is required for containerized installations to connect IBM watsonx Code Assistant with the Ansible VS Code extension, which then provides access to the Ansible Lightspeed coding assistant.

About this task

Procedure

  1. Open the VS Code application.
  2. From the Activity bar, click the Extensions icon Extensions.
  3. From the Installed Extensions list, select Ansible.
  4. From the Ansible extension page, click the Settings icon and select Extension Settings.
  5. Select Ansible Lightspeed settings, and specify the following information:
    1. Ensure that the Enable Ansible Lightspeed with watsonx Code Assistant inline suggestions checkbox is selected.
    2. In the URL for Ansible Lightspeed field, verify that you have the following URL: https://<node.ansible.com>:8447.
  6. Optional: If you want to use a custom model instead of the default model, enter the custom model ID in the Model ID Override field. This setting enables you to override the default model after your organization administrator has created a custom model and shared its model ID with you separately.

    Your settings are automatically saved in VS Code.

    Note

    If your organization recently subscribed to the Red Hat Ansible Automation Platform, it might take a few hours for Red Hat Ansible Lightspeed to detect the new subscription. In VS Code, use the Refresh button in the Ansible extension from the Activity bar to check again.

What to do next

Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform

As a system administrator, you can deploy Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform.

Overview

You can install and use the Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform. The intelligent assistant is an intuitive chat interface embedded in Ansible Automation Platform that uses generative artificial intelligence (AI) to answer questions about Ansible Automation Platform.

The Ansible Lightspeed intelligent assistant interacts with users through natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses empower Ansible users to work more efficiently, thereby improving productivity and the overall quality of their work.

Ansible Lightspeed intelligent assistant requires the following configurations:

  • Installation of Ansible Automation Platform 2.6 on Red Hat OpenShift Container Platform
  • Deployment of an LLM provider served by either a Red Hat AI platform or a third-party AI platform. To learn which LLM providers you can use, see the Large Language Model (LLM) provider requirements section.
Important

Red Hat does not collect any telemetry data from your interactions with the Ansible Lightspeed intelligent assistant.

Upgrading from Ansible Automation Platform 2.5 to 2.6.1 or 2.6 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant will be unavailable for approximately 60 seconds while its pod restarts.

Integration with MCP server

Ansible Lightspeed intelligent assistant integration with the Model Context Protocol (MCP) server is available as a Technology Preview release.

MCP is an open protocol that standardizes how applications provide context to LLMs. Using the protocol, an MCP server provides a standardized way for an LLM to increase context by requesting and receiving real-time information from external resources. This integration enables the Ansible Lightspeed intelligent assistant to deliver relevant, dynamically sourced data in response to your queries. You can configure an MCP server in the chatbot configuration secret.

Note

Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Ansible Automation Platform 2.6 requirements
  • You have installed Ansible Automation Platform 2.6 on your OpenShift Container Platform environment.
  • You have administrator privileges for the Ansible Automation Platform.
  • You have provisioned an OpenShift cluster with Operator Lifecycle Management installed.
Large Language Model (LLM) provider requirements

Before you deploy the Ansible Lightspeed intelligent assistant, you must configure the LLM provider that you intend to use.

An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner. Your LLM must have tool calling enabled to handle tool-related requests. Tool calling allows the assistant to interact with platform services and execute complex workflows.
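To make the tool-calling requirement concrete, the following sketch builds the body of a request to an OpenAI-compatible chat completions endpoint (such as the one vLLM serves) that advertises a tool to the model. The endpoint path, model name, and tool definition here are illustrative placeholders, not values from this document:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
# The function name and parameters are illustrative placeholders.
tool = {
    "type": "function",
    "function": {
        "name": "get_job_status",
        "description": "Look up the status of an automation job by ID.",
        "parameters": {
            "type": "object",
            "properties": {"job_id": {"type": "integer"}},
            "required": ["job_id"],
        },
    },
}

# Body of a POST to <inference API base URL>/chat/completions on an
# OpenAI-compatible inference server; "model" is the configured model name.
payload = {
    "model": "your-model-name",  # placeholder
    "messages": [{"role": "user", "content": "What is the status of job 42?"}],
    "tools": [tool],
    "tool_choice": "auto",  # let the model decide when to call the tool
}

print(json.dumps(payload, indent=2))
```

A model with tool calling enabled responds to such a request with either a normal answer or a `tool_calls` entry naming the function and its arguments; a model without tool calling cannot handle the `tools` field.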

Ansible Lightspeed intelligent assistant can rely on the following LLM providers:

  • Red Hat LLM providers:
    • Red Hat Enterprise Linux AI

      You can configure Red Hat Enterprise Linux AI as the LLM provider. Because Red Hat Enterprise Linux AI runs in a different environment than the Ansible Lightspeed deployment, the model deployment must allow access over a secure connection.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.

    • Red Hat OpenShift AI

      You must deploy an LLM on the Red Hat OpenShift AI single-model serving platform that uses the Virtual Large Language Model (vLLM) runtime. If the model deployment resides in a different OpenShift environment than the Ansible Lightspeed deployment, include a route to expose the model deployment outside the cluster.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.

      Note

      For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.

    • Red Hat AI Inference Server

      You can deploy an LLM using Red Hat AI Inference Server as your inference runtime. Red Hat AI Inference Server supports vLLM runtimes for efficient model serving and can be configured to work with Ansible Lightspeed intelligent assistant.

      If the Red Hat AI Inference Server deployment is in a different environment than the Ansible Lightspeed deployment, ensure the model deployment allows access using a secure connection and configure appropriate network routing.

      Ansible Lightspeed intelligent assistant supports vLLM Server when self-hosting an LLM with Red Hat AI Inference Server as the inference engine.

  • Third-party LLM providers:
    • OpenAI

      To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.

    • Microsoft Azure OpenAI

      To use Microsoft Azure with the Ansible Lightspeed intelligent assistant, you need access to Microsoft Azure OpenAI.
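For the Red Hat OpenShift AI case above, one way to expose a model deployment outside its cluster is an OpenShift Route in front of the model's predictor service. This is a minimal sketch under stated assumptions: the name, namespace, service name, and port are placeholders for your own deployment, not values from this document:

```yaml
# Hypothetical Route exposing a vLLM model server outside the cluster.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-model-route            # placeholder
  namespace: my-model-namespace   # placeholder
spec:
  to:
    kind: Service
    name: my-model-predictor      # service created for the model deployment
  port:
    targetPort: 8080              # the model server's HTTP port
  tls:
    termination: edge             # serve the route over HTTPS
```

The resulting `https://<route-host>/v1` address can then serve as the inference API base URL in the chatbot configuration secret.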

Process for configuring and using the Ansible Lightspeed intelligent assistant

Perform the following tasks to set up and use the Ansible Lightspeed intelligent assistant in your Ansible Automation Platform instance on the OpenShift Container Platform environment:

| Task | Description |
|---|---|
| Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform | For an Ansible Automation Platform administrator who wants to deploy the Ansible Lightspeed intelligent assistant for all Ansible users in the organization. Perform the following tasks: 1. Create a chatbot configuration secret. 2. Update the YAML file of the Ansible Automation Platform to use the chatbot connection secret. 3. Optional: Change your LLM model if you want to use a different LLM provider after deploying Red Hat Ansible Lightspeed. |
| Access and use the Ansible Lightspeed intelligent assistant | For all Ansible users who want to use the intelligent assistant to get answers to their questions about the Ansible Automation Platform. |