Chapter 6. Deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform
As a system administrator, you can deploy Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform.
6.1. Overview
You can install and use Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform. Ansible Lightspeed intelligent assistant is an intuitive chat interface embedded within the Ansible Automation Platform, using generative artificial intelligence (AI) to answer questions about the Ansible Automation Platform.
The Ansible Lightspeed intelligent assistant interacts with users through natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses empower Ansible users to work more efficiently, thereby improving productivity and the overall quality of their work.
Ansible Lightspeed intelligent assistant requires the following configurations:
- Installation of Ansible Automation Platform 2.6 on Red Hat OpenShift Container Platform
- Deployment of an LLM provider served by either a Red Hat AI platform or a third-party AI platform. For the list of supported LLM providers, see LLM providers.
Red Hat does not collect any telemetry data from your interactions with the Ansible Lightspeed intelligent assistant.
Upgrading from Ansible Automation Platform 2.5 to 2.6.1 or 2.6 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant will be unavailable for approximately 60 seconds while its pod restarts.
6.1.1. Integration with MCP server
Ansible Lightspeed intelligent assistant integration with the Model Context Protocol (MCP) server is available as a Technology Preview release. This integration enhances the user experience by delivering relevant, dynamically sourced data results to your queries.
MCP is an open protocol that standardizes how applications provide context to LLMs. Using the protocol, an MCP server gives an LLM a standardized way to increase its context by requesting and receiving real-time information from external resources. You can configure an MCP server in the chatbot configuration secret. For more information, see Creating a chatbot configuration secret.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.1.2. Ansible Automation Platform 2.6 requirements
- You have installed Ansible Automation Platform 2.6 on your OpenShift Container Platform environment.
- You have administrator privileges for the Ansible Automation Platform.
- You have provisioned an OpenShift cluster with Operator Lifecycle Management installed.
6.1.3. Large Language Model (LLM) provider requirements
You must configure the LLM provider that you intend to use before deploying the Ansible Lightspeed intelligent assistant.
An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.
Ansible Lightspeed intelligent assistant supports the following LLM providers:
Red Hat LLM providers:
Red Hat Enterprise Linux AI
You can configure Red Hat Enterprise Linux AI as the LLM provider. If Red Hat Enterprise Linux AI is in a different environment than the Ansible Lightspeed deployment, the model deployment must allow access using a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.
Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.
Red Hat OpenShift AI
You must deploy an LLM on the Red Hat OpenShift AI single-model serving platform that uses the Virtual Large Language Model (vLLM) runtime. If the model deployment resides in a different OpenShift environment than the Ansible Lightspeed deployment, include a route to expose the model deployment outside the cluster. For more information, see About the single-model serving platform.
Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.
Note: For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.
Red Hat AI Inference Server
You can deploy an LLM using Red Hat AI Inference Server as your inference runtime. Red Hat AI Inference Server supports vLLM runtimes for efficient model serving and can be configured to work with Ansible Lightspeed intelligent assistant. For more information, see Red Hat AI Inference Server documentation.
If the Red Hat AI Inference Server deployment is in a different environment than the Ansible Lightspeed deployment, ensure the model deployment allows access using a secure connection and configure appropriate network routing.
Ansible Lightspeed intelligent assistant supports vLLM Server when self-hosting an LLM with Red Hat AI Inference Server as the inference engine.
Third-party LLM providers:
OpenAI
To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.
Microsoft Azure OpenAI
To use Microsoft Azure OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the Microsoft Azure OpenAI service.
6.1.4. Process for configuring and using the Ansible Lightspeed intelligent assistant
Perform the following tasks to set up and use the Ansible Lightspeed intelligent assistant in your Ansible Automation Platform instance on the OpenShift Container Platform environment:
| Task | Description |
|---|---|
| Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform | For an Ansible Automation Platform administrator who wants to deploy the Ansible Lightspeed intelligent assistant for all Ansible users in the organization. Perform the following tasks: create a chatbot configuration secret, and update the YAML file of the Ansible Automation Platform operator. |
| Access and use the Ansible Lightspeed intelligent assistant | For all Ansible users who want to use the intelligent assistant to get answers to their questions about the Ansible Automation Platform. For more details, see Using the Ansible Lightspeed intelligent assistant. |
6.2. Deploying the Ansible Lightspeed intelligent assistant
This section provides information about the procedures involved in deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform.
6.2.1. Creating a chatbot configuration secret
Create a configuration secret for the Ansible Lightspeed intelligent assistant, so that you can connect the intelligent assistant to the Ansible Automation Platform operator.
Procedure
- Log in to Red Hat OpenShift Container Platform as an administrator.
- Navigate to Workloads → Secrets.
- From the Projects list, select the namespace that you created when you installed the Ansible Automation Platform operator.
- Click Create → Key/value secret.
- In the Secret name field, enter a unique name for the secret. For example, `chatbot-configuration-secret`.
- Add the following keys and their associated values individually:

| Key | Value |
|---|---|
| **Settings for all LLM setups** | |
| `chatbot_model` | Enter the LLM model name that is configured on your LLM setup. |
| `chatbot_url` | Enter the inference API base URL of your LLM setup. For example, `https://your_inference_api/v1`. |
| `chatbot_token` | Enter the API token or the API key. This token is sent in the authorization header when the inference API is called. |
| `chatbot_llm_provider_type` (optional) | Enter the value for the provider type of your LLM setup: Red Hat Enterprise Linux AI: `rhelai_vllm`; Red Hat OpenShift AI: `rhoai_vllm`; OpenAI: `openai`; Microsoft Azure OpenAI: `azure_openai`. |
| `chatbot_model_config_extras` (optional) | Use this field to pass a JSON dictionary of extra parameters directly to the model provider, for settings not covered by other standard fields. For example, you can specify an `api_version` parameter for Microsoft Azure OpenAI in the JSON format `'{"api_version": "<your API version>"}'`. |
| **Additional settings for MCP server configuration** | |
| `aap_gateway_url`, `aap_controller_url` | Configure a Model Context Protocol (MCP) server that interfaces with the Ansible Lightspeed intelligent assistant. These values are internal URLs accessible to the platform gateway and automation controller services on the OpenShift cluster. For example, if the name of your Ansible Automation Platform custom resource is `myaap`, these URLs are `aap_gateway_url`: `http://myaap` and `aap_controller_url`: `http://myaap-controller-service`. |

For MCP server configuration:

- If neither parameter is configured, no MCP server is provisioned or registered with the underlying LLM's tools at runtime.
- If you configure only the `aap_gateway_url` parameter, the Ansible Lightspeed Service MCP server is provisioned. Authentication attempts to use the JSON Web Token (JWT) associated with the user's authenticated context.
- If you configure both the `aap_gateway_url` and `aap_controller_url` parameters, the Ansible Lightspeed Service MCP server and the Ansible Automation Platform Controller Service MCP server are both configured. Authentication attempts to use the JWT associated with the user's authenticated context.

- Click Create. The chatbot configuration secret is successfully created.
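As an alternative to the web console, the same keys can be supplied in a Secret manifest. The following is a minimal sketch, assuming a namespace named `aap`, placeholder model details, and the optional MCP keys; substitute your own values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret   # unique secret name referenced later in the operator YAML
  namespace: aap                       # assumed: namespace where the Ansible Automation Platform operator is installed
type: Opaque
stringData:
  chatbot_model: <your model name>                # LLM model name configured on your LLM setup
  chatbot_url: https://your_inference_api/v1      # inference API base URL
  chatbot_token: <your API token or key>          # sent in the authorization header on inference calls
  chatbot_llm_provider_type: rhoai_vllm           # optional: rhelai_vllm, rhoai_vllm, openai, or azure_openai
  # Optional MCP server settings; "myaap" is the example custom resource name from this chapter
  aap_gateway_url: http://myaap
  aap_controller_url: http://myaap-controller-service
```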
6.2.2. Updating the YAML file of the Ansible Automation Platform operator
After you create the chatbot configuration secret, you must update the YAML file of the Ansible Automation Platform operator to use the secret.
Procedure
- Log in to Red Hat OpenShift Container Platform as an administrator.
- Navigate to Operators → Installed Operators.
- From the list of installed operators, select the Ansible Automation Platform operator.
- Locate and select the Ansible Automation Platform custom resource, and then click the required app.
- Select the YAML tab.
- Find the `spec:` section, and add the following details under it:

```yaml
spec:
  lightspeed:
    disabled: false
    chatbot_config_secret_name: <name of your chatbot configuration secret>
```

- Click Save. The Ansible Lightspeed intelligent assistant service takes a few minutes to set up.
Note: Upgrading from Ansible Automation Platform 2.5 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant is unavailable for approximately 60 seconds while its pod restarts.
Verification
Verify that the chat interface service is running successfully:
- Navigate to Workloads → Pods.
- Filter with the term `api` and ensure that the following pods are displayed in Running status:
  - `myaap-lightspeed-api-<version number>`
  - `myaap-lightspeed-chatbot-api-<version number>`
- If you specified either the `aap_gateway_url` or `aap_controller_url` parameter, verify the MCP server configuration:
  - Open the `lightspeed-chatbot-api` pod and click the Containers section.
  - If the `ansible-mcp-lightspeed` container is displayed, the Ansible Lightspeed MCP server is running.
  - If the `ansible-mcp-controller` container is displayed, the Ansible Automation Platform Controller Service MCP server is running.
Verify that the chat interface is displayed on the Ansible Automation Platform:
Access the Ansible Automation Platform:

- Navigate to Operators → Installed Operators.
- From the list of installed operators, click Ansible Automation Platform.
- Locate and select the Ansible Automation Platform custom resource, and then click the app that you created.
- From the Details tab, record the information available in the following fields:
  - URL: the URL of your Ansible Automation Platform instance.
  - Gateway Admin User: the username to log in to your Ansible Automation Platform instance.
  - Gateway Admin Password: the password to log in to your Ansible Automation Platform instance.
- Log in to the Ansible Automation Platform using the URL, username, and password that you recorded earlier.

Access the Ansible Lightspeed intelligent assistant:

- Click the Ansible Lightspeed intelligent assistant icon at the top right corner of the taskbar.
- Verify that the chat interface is displayed.
6.2.3. Changing your LLM model
If you have already deployed Ansible Lightspeed intelligent assistant but want to change your LLM model, you can create a new chatbot configuration secret for the new LLM model.
Alternatively, if you want to use the same chatbot configuration secret, you must delete and redeploy the Ansible Lightspeed intelligent assistant.
Procedure
To create and use a new chatbot configuration secret:
- Create a new chatbot configuration secret with a different name for the new LLM model.
- Update the YAML file of the Ansible Automation Platform operator with the new chatbot configuration secret name. The Ansible Automation Platform operator detects the new configuration and redeploys the Ansible Lightspeed intelligent assistant.
- Verify that the chat interface service is running successfully. See the verification steps in Updating the YAML file of the Ansible Automation Platform operator.

Important: Do not update the existing chatbot configuration secret with the new LLM model, because the reconciliation logic does not detect updates made to the secret.
To use the same chatbot secret by deleting and redeploying the Ansible Lightspeed intelligent assistant:
Disable the Ansible Lightspeed operator instance:

- Navigate to Operators → Installed Operators.
- From the list of installed operators, select Ansible Automation Platform.
- Locate and select the Ansible Automation Platform custom resource.
- Select the YAML tab and, under the `spec:` section for the `lightspeed` category, specify `disabled: true`.
- Click Save.

Delete the Ansible Lightspeed operator instance:

- Navigate to Operators → Installed Operators.
- From the list of installed operators, select Ansible Lightspeed and delete the operator.

Re-enable the Ansible Automation Platform instance:

- Navigate to Operators → Installed Operators.
- From the list of installed operators, select Ansible Automation Platform.
- Locate and select the Ansible Automation Platform custom resource.
- Select the YAML tab and, under the `spec:` section for the `lightspeed` category, specify `disabled: false`.
- Click Save.
6.2.4. Using the Ansible Lightspeed intelligent assistant
After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.
6.2.4.1. Accessing the Ansible Lightspeed intelligent assistant
- Log in to the Ansible Automation Platform.
- Click the Ansible Lightspeed intelligent assistant icon at the top right corner of the taskbar. The Ansible Lightspeed intelligent assistant window opens with a welcome message.
6.2.4.2. Using the Ansible Lightspeed intelligent assistant
You can perform the following tasks:
- Ask questions in the prompt field and get answers about the Ansible Automation Platform.
  Note: If you are using an IBM Granite 3.3 series AI model, you might experience a delay of about one minute when waiting for a chat response. To resolve this issue, restart the chat session.
- View the chat history of all conversations in a chat session.
- Search the chat history using a user prompt or answer. The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.
- Restore an earlier chat by clicking the relevant entry from the chat history.
- Give feedback on the quality of the chat answers by clicking the Thumbs up or Thumbs down icon.
- Copy and record the answers by clicking the Copy icon.
- Switch the assistant between dark and light mode by clicking the Sun icon at the top right corner of the toolbar.
- Clear the context of an existing chat by using the New chat button in the chat history.
- Close the chat interface while working on the Ansible Automation Platform.