Chapter 1. About OpenShift Lightspeed
Red Hat OpenShift Lightspeed is a generative AI service that helps developers and administrators solve problems by providing context-aware recommendations for OpenShift Container Platform.
1.1. OpenShift Lightspeed overview
Use Red Hat OpenShift Lightspeed to troubleshoot and manage OpenShift clusters through a natural-language virtual assistant in the web console.
1.1.1. About product coverage
Red Hat OpenShift Lightspeed provides answers to questions by generating responses derived directly from official OpenShift Container Platform documentation.
1.1.1.1. Product exceptions
The OpenShift Container Platform product documentation does not include information about all products in the Red Hat portfolio. As a result, the Red Hat OpenShift Lightspeed service uses the large language model (LLM) you provide to produce output for the following products or components:
- Builds for Red Hat OpenShift
- Red Hat Advanced Cluster Security for Kubernetes
- Red Hat Advanced Cluster Management for Kubernetes
- Red Hat CodeReady Workspaces
- Red Hat OpenShift GitOps
- Red Hat OpenShift Pipelines
- Red Hat OpenShift Serverless
- Red Hat OpenShift Service Mesh 3.x
- Red Hat Quay
1.2. OpenShift Lightspeed requirements
Hardware and software requirements for OpenShift Lightspeed, including supported OpenShift Container Platform versions and CPU architectures.
Telemetry is enabled on OpenShift Container Platform clusters by default.
- If the cluster has telemetry enabled, the OpenShift Lightspeed service sends conversations and feedback to Red Hat by default.
- If the cluster has telemetry disabled, the OpenShift Lightspeed service does not send conversations and feedback to Red Hat.
- If the cluster has telemetry enabled, and you do not want the OpenShift Lightspeed service to send conversations and feedback to Red Hat, you must disable telemetry.
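To confirm whether telemetry is currently enabled, you can check the cluster pull secret for the cloud.openshift.com entry that the Telemeter Client uses. This check is a sketch based on the general OpenShift Container Platform remote health monitoring documentation, not a Lightspeed-specific command:

$ oc get secret pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' | grep cloud.openshift.com

If the command returns no output, the pull secret does not contain the telemetry endpoint and telemetry is disabled for the cluster.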
1.2.1. Cluster resource requirements
Ensure that OpenShift Lightspeed has sufficient CPU, memory, and storage allocations to maintain service performance and cluster stability without impacting other cluster workloads.
| Component | Minimum CPU (Cores) | Minimum Memory | Maximum Memory |
|---|---|---|---|
| Application server | 0.5 | 1 Gi | 4 Gi |
| Postgres database | 0.3 | 300 Mi | 2 Gi |
| OpenShift Container Platform web console | 0.1 | 50 Mi | 100 Mi |
| OpenShift Lightspeed operator | 0.1 | 64 Mi | 256 Mi |
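After the service is deployed, you can compare actual consumption against these minimums with the cluster metrics tooling. The following command is a sketch that assumes the default openshift-lightspeed namespace and that cluster metrics are available:

$ oc adm top pods -n openshift-lightspeed

Compare the reported CPU and memory usage with the values in the preceding table to confirm that the allocations remain sufficient.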
1.3. Large language model (LLM) requirements
OpenShift Lightspeed supports Software as a Service (SaaS) and self-hosted large language model (LLM) providers that meet defined authentication requirements.
A large language model (LLM) is a type of machine learning model that interprets and generates human-like language. When an LLM is used with a virtual assistant, the LLM can accurately interpret questions and provide helpful answers in a conversational manner. The OpenShift Lightspeed service must have access to an LLM provider.
The service does not provide an LLM for you, so you must configure the LLM prior to installing the OpenShift Lightspeed Operator.
Red Hat does not provide support for specific models and does not make recommendations or support statements about any model.
The OpenShift Lightspeed service supports the following SaaS LLM providers:
- OpenAI
- Microsoft Azure OpenAI
- IBM watsonx
If you want to self-host a model, you can use Red Hat OpenShift AI or Red Hat Enterprise Linux AI as your model provider.
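For orientation only, the following is a minimal sketch of how a SaaS provider might be wired into the OLSConfig CR after the Operator is installed. The field names (llm.providers, credentialsSecretRef, defaultProvider, defaultModel), the model name, the secret key apitoken, and the openshift-lightspeed namespace are assumptions in this sketch; verify them against the OLSConfig schema and the configuration documentation for your provider before use.

$ oc create secret generic credentials --from-literal=apitoken="<your_api_token>" -n openshift-lightspeed   # assumed secret key and namespace

apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: openai                 # provider entry for a SaaS LLM
      type: openai
      credentialsSecretRef:
        name: credentials          # secret created above
      models:
      - name: gpt-4o               # assumed model name
  ols:
    defaultProvider: openai
    defaultModel: gpt-4o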
1.3.1. IBM watsonx
To use IBM watsonx with Red Hat OpenShift Lightspeed, you need an account with IBM Cloud watsonx. For more information, see the Documentation for IBM watsonx as a Service.
1.3.2. OpenAI
To use OpenAI with Red Hat OpenShift Lightspeed, you need access to the OpenAI API platform. For more information, see the OpenAI developer platform documentation.
1.3.3. Microsoft Azure OpenAI
To use Microsoft Azure with Red Hat OpenShift Lightspeed, you need access to Microsoft Azure OpenAI. For more information, see the Azure OpenAI documentation.
1.3.4. Red Hat Enterprise Linux AI
Red Hat Enterprise Linux AI is OpenAI API-compatible and is configured in a manner similar to the OpenAI provider.
You can configure Red Hat Enterprise Linux AI as the LLM provider.
Because Red Hat Enterprise Linux AI runs in a different environment than the OpenShift Lightspeed deployment, the model deployment must allow access over a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.
OpenShift Lightspeed version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting an LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.
1.3.5. Red Hat OpenShift AI
Red Hat OpenShift AI is OpenAI API-compatible and is configured in a manner similar to the OpenAI provider.
You must deploy an LLM on the Red Hat OpenShift AI single-model serving platform that uses the Virtual Large Language Model (vLLM) runtime. If the model deployment resides in a different OpenShift environment than the OpenShift Lightspeed deployment, include a route to expose the model deployment outside the cluster. For more information, see About the single-model serving platform.
OpenShift Lightspeed version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting an LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.
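As an illustration, a self-hosted vLLM endpoint exposed through a route can be referenced from the OLSConfig CR in roughly the following way. The provider type rhoai_vllm, the url field, the secret name, and the model name are all assumptions in this sketch; check the OLSConfig schema for the provider types and fields that your Operator version supports.

spec:
  llm:
    providers:
    - name: my-rhoai-vllm                       # assumed provider name
      type: rhoai_vllm                          # assumed provider type for OpenShift AI vLLM
      url: https://<model_route_hostname>/v1    # route exposing the vLLM OpenAI-compatible API
      credentialsSecretRef:
        name: vllm-credentials                  # assumed secret holding the endpoint token
      models:
      - name: <deployed_model_name>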
1.4. OpenShift Lightspeed FIPS support
Red Hat OpenShift Lightspeed supports Federal Information Processing Standards (FIPS) and can be deployed on OpenShift clusters running in FIPS mode.
FIPS is a set of publicly announced standards developed by the National Institute of Standards and Technology (NIST), a part of the U.S. Department of Commerce. The primary purpose of FIPS is to ensure the security and interoperability of computer systems used by U.S. federal government agencies and their associated contractors.
When running on OpenShift Container Platform in FIPS mode, OpenShift Lightspeed uses the Red Hat Enterprise Linux cryptographic libraries that have been submitted, or are planned to be submitted, to NIST for FIPS validation on the x86_64, ppc64le, and s390x architectures only. For more information about the NIST validation program, see Cryptographic Module Validation Program (NIST). For the latest NIST status of the individual versions of Red Hat Enterprise Linux cryptographic libraries that have been submitted for validation, see Product compliance.
1.5. Supported architecture
OpenShift Lightspeed is compatible only with OpenShift Container Platform clusters running on the x86_64 architecture.
1.6. About running OpenShift Lightspeed in disconnected mode
OpenShift Lightspeed supports operation in disconnected environments that do not have full internet access.
In a disconnected environment, you must mirror the required container images into the environment. For more information, see "Mirroring in disconnected environments" in the OpenShift Container Platform product documentation.
When you mirror the images for a disconnected environment, you must include the OpenShift Lightspeed Operator in the image set configuration that you pass to the oc mirror command, as shown in the sketch that follows.
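The following ImageSetConfiguration fragment is a sketch of how the Operator might be listed for oc mirror. The catalog index version and the package name lightspeed-operator are assumptions here, and the apiVersion shown is for the oc-mirror plugin v2 (the v1 plugin uses v1alpha2); confirm the package name against the catalog or the mirroring documentation before use.

kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17   # match your cluster version
    packages:
    - name: lightspeed-operator                                      # assumed package name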
1.7. About data use
OpenShift Lightspeed enriches user chat messages with cluster and environment context before sending them to the configured large language model (LLM) provider for response generation.
OpenShift Lightspeed has limited capabilities to filter or redact the data and information you provide to the LLM. Do not enter data and information into the OpenShift Lightspeed interface that you do not want to send to the LLM provider.
By sending transcripts or feedback to Red Hat, you agree that Red Hat can use the data for quality assurance purposes. The transcript recording data uses the back end of the Red Hat Insights system and is subject to the same access restrictions and other security policies.
You can email Red Hat and request that your data be deleted.
1.8. About data, telemetry, transcript, and feedback collection
OpenShift Lightspeed processes natural-language messages and cluster metadata through a redaction layer before transmitting the data to your configured LLM provider.
Do not enter any information into the OpenShift Lightspeed user interface that you do not want sent to the LLM provider.
The transcript recording data uses the Red Hat Insights system back-end and is subject to the same access restrictions and other security policies described in Red Hat Insights data and application security.
1.9. Remote health monitoring overview
Remote Health Monitoring uses the Telemeter Client and Insights Operator to gather and report cluster information for Red Hat analysis and support.
The OpenShift documentation for remote health monitoring explains data collection and includes instructions for opting out. To disable transcript or feedback collection, you must follow the procedure for opting out of remote health monitoring. For more information, see "About remote health monitoring" in the OpenShift Container Platform documentation.
1.9.1. Transcript collection overview
OpenShift Lightspeed periodically transmits chat transcripts to Red Hat using a redaction process that ensures only filtered content is shared or logged.
Transcripts are sent to Red Hat every two hours by default. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
OpenShift Lightspeed temporarily logs and stores complete transcripts of conversations that users have with the virtual assistant. This includes the following information:
- Queries from the user.
- The complete message sent to the configured Large Language Model (LLM) provider, which includes system instructions, referenced documentation, and the user question.
- The complete response from the LLM provider.
Transcripts originate from the cluster and are associated with the cluster. Red Hat can assign specific clusters to specific customer accounts. Transcripts do not contain any information about users.
1.9.2. Feedback collection overview
OpenShift Lightspeed collects opt-in user feedback from the virtual assistant interface to analyze response accuracy and improve service quality.
If you submit feedback, the feedback score (thumbs up or down), text feedback (if entered), your query, and the LLM provider response are stored and sent to Red Hat on the same schedule as transcript collection. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to Red Hat. Red Hat will not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
Feedback is associated with the cluster from which it originated, and Red Hat can attribute specific clusters to specific customer accounts. Feedback does not contain any information about which user submitted the feedback, and feedback cannot be tied to any individual user.
1.9.3. Disabling data collection on the OpenShift Lightspeed Service
Disable data collection for OpenShift Lightspeed by updating the user data collection settings in the OLSConfig custom resource (CR).
By default, OpenShift Lightspeed collects information about the questions you ask and the feedback you provide on the answers that the Service generates.
Prerequisites
- You have an LLM provider available for use with the OpenShift Lightspeed Service.
- You have installed the OpenShift Lightspeed Operator.
- You have configured the OLSConfig CR file, which automatically deploys the OpenShift Lightspeed Service.
Procedure
- Open the OpenShift Lightspeed OLSConfig CR file by running the following command:

  $ oc edit olsconfig cluster

- Modify the spec.ols.userDataCollection field to disable data collection for the OpenShift Lightspeed CR. An example OLSConfig CR sketch follows this procedure.
- Save the file.
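The following is a minimal sketch of an example OLSConfig CR for this procedure. The spec.ols.userDataCollection field comes from the step above, but the feedbackDisabled and transcriptsDisabled field names are assumptions in this sketch; confirm them against the OLSConfig schema on your cluster, for example with oc explain olsconfig.spec.ols.userDataCollection.

Example OLSConfig CR

apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  ols:
    userDataCollection:
      feedbackDisabled: true      # assumed field; stops feedback from being collected
      transcriptsDisabled: true   # assumed field; stops transcripts from being collected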