Chapter 6. Deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform


As a system administrator, you can deploy the Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.5 on OpenShift Container Platform.

6.1. Overview

The Ansible Lightspeed intelligent assistant is available on Ansible Automation Platform 2.5 on OpenShift Container Platform as a Technology Preview release. It is an intuitive chat interface embedded within the Ansible Automation Platform, using generative artificial intelligence (AI) to answer questions about the Ansible Automation Platform.

The Ansible Lightspeed intelligent assistant interacts with users through natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses help Ansible users work more efficiently, improving their productivity and the overall quality of their work.

The Ansible Lightspeed intelligent assistant requires the following configurations:

  • Installation of Ansible Automation Platform 2.5 on Red Hat OpenShift Container Platform
  • Deployment of an LLM served by either a Red Hat AI platform or a third-party AI platform. For the list of LLM providers that you can use, see LLM providers.
Important
  • Red Hat does not collect any telemetry data from your interactions with the Ansible Lightspeed intelligent assistant.
  • Ansible Lightspeed intelligent assistant is available as a Technology Preview feature only.

    Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Upgrading from Ansible Automation Platform 2.5 to 2.6.1, or from 2.6 to 2.6.1, enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. After the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant is unavailable for approximately 60 seconds while its pod restarts.

6.1.1. Integration with MCP server

Ansible Lightspeed intelligent assistant integration with the Model Context Protocol (MCP) server is available as a Technology Preview release. This integration enhances the user experience by delivering relevant, dynamically sourced data results to your queries.

MCP is an open protocol that standardizes how applications provide context to LLMs. Using the protocol, an MCP server gives an LLM a standardized way to increase its context by requesting and receiving real-time information from external resources. You can configure an MCP server in the chatbot configuration secret. For more information, see Creating a chatbot configuration secret.

Note

Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.1.2. Ansible Automation Platform requirements

  • You have installed Ansible Automation Platform 2.5 on your OpenShift Container Platform environment.
  • You have administrator privileges for the Ansible Automation Platform.
  • You have provisioned an OpenShift cluster with Operator Lifecycle Management installed.

6.2. Large Language Model (LLM) provider requirements

Before you deploy the Ansible Lightspeed intelligent assistant, you must configure the LLM provider that you want to use.

An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.

As part of the Technology Preview release, Ansible Lightspeed intelligent assistant can rely on the following Software as a Service (SaaS) LLM providers:

Red Hat LLM providers

  • Red Hat Enterprise Linux AI

    Red Hat Enterprise Linux AI is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider. You can configure Red Hat Enterprise Linux AI as the LLM provider. For more information, see the Red Hat Enterprise Linux AI product page.

  • Red Hat OpenShift AI

    Red Hat OpenShift AI is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider. You can configure Red Hat OpenShift AI as the LLM provider. For more information, see the Red Hat OpenShift AI product page.

Note

For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.

Third-party LLM providers

  • IBM watsonx.ai

    To use IBM watsonx with the Ansible Lightspeed intelligent assistant, you need an account with IBM watsonx.ai.

  • OpenAI

    To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.

  • Microsoft Azure OpenAI

    To use Microsoft Azure with the Ansible Lightspeed intelligent assistant, you need access to Microsoft Azure OpenAI.

    Note

    Many self-hosted or self-managed model servers claim API compatibility with OpenAI. It is possible to configure the Ansible Lightspeed intelligent assistant OpenAI provider to point to an API-compatible model server. If the model server is truly API-compatible, especially with respect to authentication, then it might work. These configurations have not been tested by Red Hat, and issues related to their use are outside the scope of Technology Preview support.
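As a rough illustration of what OpenAI API compatibility means in practice, the following sketch builds the kind of chat-completion request an OpenAI-style client sends. The endpoint URL, model name, and token are placeholders, not values from this product:

```python
import json

# Placeholder values; substitute your own chatbot_url, chatbot_model,
# and chatbot_token when checking a model server for compatibility.
base_url = "https://your-inference-api.example.com/v1"
payload = {
    "model": "granite-3-8b-instruct",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0,
}
headers = {"Authorization": "Bearer REPLACE_WITH_TOKEN"}

# An OpenAI-compatible server must accept this request shape at
# /chat/completions and authenticate via the Authorization: Bearer header.
print(f"POST {base_url}/chat/completions")
print(json.dumps(payload, indent=2))
```

If a self-hosted server rejects this request shape or the bearer-token header, it is unlikely to work with the OpenAI provider configuration.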

6.3. Process for configuring and using the Ansible Lightspeed intelligent assistant

Perform the following tasks to set up and use the Ansible Lightspeed intelligent assistant in your Ansible Automation Platform instance on the OpenShift Container Platform environment:

Task: Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform

Who performs it: An Ansible Automation Platform administrator who wants to deploy the Ansible Lightspeed intelligent assistant for all Ansible users in the organization.

The administrator performs the following tasks:

  1. Create a chatbot configuration secret.
  2. Update the YAML file of the Ansible Automation Platform operator to use the chatbot configuration secret.
  3. Optional: Change your LLM model if you want to use a different LLM provider after deploying the Ansible Lightspeed intelligent assistant.

Task: Access and use the Ansible Lightspeed intelligent assistant

Who performs it: All Ansible users who want to use the intelligent assistant to get answers to their questions about the Ansible Automation Platform. For more details, see Using the Ansible Lightspeed intelligent assistant.

6.4. Deploying the Ansible Lightspeed intelligent assistant

This section provides information about the procedures involved in deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform.

6.4.1. Creating a chatbot configuration secret

Create a configuration secret for the Ansible Lightspeed intelligent assistant, so that you can connect the intelligent assistant to the Ansible Automation Platform operator.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Workloads → Secrets.
  3. From the Projects list, select the namespace that you created when you installed the Ansible Automation Platform operator.
  4. Click Create → Key/value secret.
  5. In the Secret name field, enter a unique name for the secret. For example, chatbot-configuration-secret.
  6. Add the following keys and their associated values individually:


    Settings for all LLM setups

    chatbot_model

    Enter the LLM model name that is configured on your LLM setup.

    chatbot_url

    Enter the inference API base URL on your LLM setup. For example, https://your_inference_api/v1.

    chatbot_token

    Enter the API token or the API key. This token is sent along with the authorization header when an inference API is called.

    chatbot_llm_provider_type

    Optional

    Enter the provider type of your LLM setup by using one of the following values:

    • Red Hat Enterprise Linux AI: rhelai_vllm
    • Red Hat OpenShift AI: rhoai_vllm (Default value)
    • IBM watsonx.ai: watsonx
    • OpenAI: openai
    • Microsoft Azure OpenAI: azure_openai

    chatbot_context_window_size

    Optional

    Enter a value to configure the context window length for your LLM setup.

    Default: 128000

    chatbot_temperature_override

    Optional

    A lower temperature generates predictable results, while a higher temperature allows more diverse or creative responses.

    Enter one of the following values:

    • 0: Least creativity and randomness in the responses.
    • 1: Maximum creativity and randomness in the responses.
    • null: Disables the default temperature setting.

      Note

      A few OpenAI o-series models (o1, o3-mini, and o4-mini models) do not support the temperature settings. Therefore, you must set the value to null to use these OpenAI models.

    Additional setting for IBM watsonx.ai only

    chatbot_llm_provider_project_id

    Enter the project ID of your IBM watsonx setup.

    Additional settings for Microsoft Azure OpenAI only

    chatbot_azure_deployment_name

    Enter the deployment name of your Microsoft Azure OpenAI setup.

    chatbot_azure_api_version

    Optional

    Enter the API version of your Microsoft Azure OpenAI setup.

  7. Click Create. The chatbot configuration secret is created.
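If you prefer to create the secret from a manifest instead of the console, the equivalent Key/value secret can be sketched as follows. All values shown are placeholders, and the optional keys can be omitted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret
  namespace: <your-aap-namespace>
type: Opaque
stringData:
  chatbot_model: granite-3-8b-instruct        # placeholder model name
  chatbot_url: https://your_inference_api/v1  # placeholder inference API base URL
  chatbot_token: <your-api-token>
  chatbot_llm_provider_type: rhoai_vllm       # optional; default value
```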

6.4.2. Updating the YAML file of the Ansible Automation Platform operator

After you create the chatbot configuration secret, you must update the YAML file of the Ansible Automation Platform operator to use the secret.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Operators → Installed Operators.
  3. From the list of installed operators, select the Ansible Automation Platform operator.
  4. Locate and select the Ansible Automation Platform custom resource, and then click the required app.
  5. Select the YAML tab.
  6. Scroll to the spec: section, and add the following details under it:

    spec:
      lightspeed:
        disabled: false
        chatbot_config_secret_name: <name of your chatbot configuration secret>
  7. Click Save. The Ansible Lightspeed intelligent assistant service takes a few minutes to set up.
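For context, the spec: addition sits inside the Ansible Automation Platform custom resource. A trimmed sketch of the full resource might look like the following; the apiVersion and names are illustrative and may differ in your cluster:

```yaml
apiVersion: aap.ansible.com/v1alpha1   # illustrative; check your installed CRD version
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
  namespace: <your-aap-namespace>
spec:
  lightspeed:
    disabled: false
    chatbot_config_secret_name: chatbot-configuration-secret
```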

Verification

  1. Verify that the chat interface service is running successfully:

    1. Navigate to Workloads → Pods.
    2. Filter with the term api and ensure that the following pods are displayed in the Running status:

      • myaap-lightspeed-api-<version number>
      • myaap-lightspeed-chatbot-api-<version number>
  2. Verify that the chat interface is displayed on the Ansible Automation Platform:

    1. Access the Ansible Automation Platform:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, click Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource, and then click the app that you created.
      4. From the Details tab, record the information available in the following fields:

        • URL: This is the URL of your Ansible Automation Platform instance.
        • Gateway Admin User: This is the username to log into your Ansible Automation Platform instance.
        • Gateway Admin password: This is the password to log into your Ansible Automation Platform instance.
      5. Log in to the Ansible Automation Platform using the URL, username, and password that you recorded earlier.
    2. Access the Ansible Lightspeed intelligent assistant:

      1. Click the Ansible Lightspeed intelligent assistant icon in the top right corner of the taskbar.
      2. Verify that the chat interface is displayed.

6.4.3. Changing your LLM model

If you have already deployed Ansible Lightspeed intelligent assistant but want to change your LLM model, you can create a new chatbot configuration secret for the new LLM model.

Alternatively, if you want to use the same chatbot configuration secret, you must delete and redeploy the Ansible Lightspeed intelligent assistant.

Procedure

  • To create and use a new chatbot configuration secret:

    1. Create a new chatbot configuration secret with a different name for the new LLM model.
    2. Update the YAML file of the Ansible Automation Platform operator with the new chatbot configuration secret name.

      The Ansible Automation Platform operator detects the new configuration and redeploys the Ansible Lightspeed intelligent assistant.

    3. Verify that the chat interface service is running successfully. See the verification steps in Updating the YAML file of the Ansible Automation Platform operator.

      Important

      Do not update the existing chatbot configuration secret with the new LLM model, because the reconciliation logic does not detect updates made to the secret.

  • To use the same chatbot secret by deleting and redeploying the Ansible Lightspeed intelligent assistant:

    1. Disable the Ansible Lightspeed operator instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab, and in the lightspeed section under spec:, set disabled: true.
      5. Click Save.
    2. Delete the Ansible Lightspeed operator instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Lightspeed and delete the instance.
    3. Re-enable the Ansible Automation Platform instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab, and in the lightspeed section under spec:, set disabled: false.
      5. Click Save.
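The disable/re-enable cycle in the steps above amounts to flipping one field in the lightspeed section of the custom resource. A sketch of the two states:

```yaml
# Step 1: disable the Ansible Lightspeed instance
spec:
  lightspeed:
    disabled: true
---
# Step 3: after deleting the instance, re-enable it to redeploy
# with the same chatbot configuration secret
spec:
  lightspeed:
    disabled: false
```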

6.4.4. Using the Ansible Lightspeed intelligent assistant

After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.

6.4.4.1. Accessing the Ansible Lightspeed intelligent assistant

  1. Log in to the Ansible Automation Platform.
  2. Click the Ansible Lightspeed intelligent assistant icon in the top right corner of the taskbar.

    The Ansible Lightspeed intelligent assistant window opens with a welcome message.

6.4.4.2. Using the Ansible Lightspeed intelligent assistant

You can perform the following tasks:

  • Ask questions in the prompt field and get answers about the Ansible Automation Platform
  • View the chat history of all conversations in a chat session
  • Search the chat history using a user prompt or answer

    The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.

  • Restore a previous chat by clicking the relevant entry from the chat history
  • Provide feedback on the quality of the chat answers by clicking the Thumbs up or Thumbs down icon
  • Copy and record the answers by clicking the Copy icon
  • Switch the assistant between dark and light mode by clicking the Sun icon in the top right corner of the toolbar
  • Clear the context of an existing chat by using the New chat button in the chat history
  • Close the chat interface while working on the Ansible Automation Platform