Chapter 1. About OpenShift Lightspeed
The following topics provide an overview of {ols-official} and discuss functional requirements.
1.1. OpenShift Lightspeed Overview
Red Hat OpenShift Lightspeed is a generative AI-powered virtual assistant for OpenShift Container Platform. Lightspeed functionality uses a natural-language interface in the OpenShift web console.
This Technology Preview program exists so that you can provide feedback on the user experience, features and capabilities, issues encountered, and any other aspects of the product, helping Lightspeed align more closely with your needs when it is released and made generally available.
1.2. OpenShift Requirements
OpenShift Lightspeed requires OpenShift Container Platform 4.15 or later running on x86 hardware. Any installation type or deployment architecture is supported as long as the cluster meets those requirements.
For the OpenShift Lightspeed Technology Preview release, the cluster must be connected to the Internet and must have telemetry enabled. Telemetry is enabled by default. If you are using a standard installation process for OpenShift, confirm that it does not disable telemetry.
1.3. Large Language Model (LLM) requirements
As part of the Technology Preview release, OpenShift Lightspeed can rely on the following Software as a Service (SaaS) Large Language Model (LLM) providers:
- OpenAI
- Microsoft Azure OpenAI
- IBM WatsonX
Many self-hosted or self-managed model servers claim API compatibility with OpenAI. It is possible to configure the OpenShift Lightspeed OpenAI provider to point to an API-compatible model server. If the model server is truly API-compatible, especially with respect to authentication, then it may work. These configurations have not been tested by Red Hat, and issues related to their use are outside the scope of Technology Preview support.
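As an illustration only, a minimal OLSConfig custom resource that points the OpenAI provider type at a self-hosted, API-compatible model server might look like the following. The URL, secret name, model name, and provider name are placeholders, and the exact field names depend on the version of the Lightspeed Operator you have installed:

```yaml
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: my_openai_compatible          # arbitrary provider name (placeholder)
      type: openai                        # reuse the OpenAI provider type
      url: https://models.example.com/v1  # placeholder: your server's OpenAI-compatible endpoint
      credentialsSecretRef:
        name: model-api-keys              # secret containing the API token, if your server requires one
      models:
      - name: my-model                    # placeholder: a model name your endpoint serves
  ols:
    defaultProvider: my_openai_compatible
    defaultModel: my-model
```

Whether this works depends entirely on how faithfully the model server implements the OpenAI API, particularly its authentication behavior; as noted above, such configurations are untested and unsupported in the Technology Preview.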
For OpenShift Lightspeed configurations with Red Hat OpenShift AI, you must host your own LLM provider.
1.3.1. About OpenAI
To use OpenAI with Red Hat OpenShift Lightspeed, you will need access to the OpenAI API platform.
1.3.2. About Azure OpenAI
To use Microsoft Azure with Red Hat OpenShift Lightspeed, you must have access to Microsoft Azure OpenAI.
1.3.3. About WatsonX
To use IBM WatsonX with Red Hat OpenShift Lightspeed, you must have an IBM Cloud account with access to WatsonX.
1.3.4. About Red Hat Enterprise Linux AI
Red Hat Enterprise Linux AI is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider.
You can configure Red Hat Enterprise Linux AI as the Large Language Model (LLM) provider.
Because Red Hat Enterprise Linux AI runs in a different environment than the OpenShift Lightspeed deployment, the model deployment must allow access over a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.
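As a sketch, and assuming your Red Hat Enterprise Linux AI model is served by vLLM over TLS at a placeholder address, the provider entry in the OLSConfig custom resource could reuse the OpenAI provider type (all names and the URL below are placeholders):

```yaml
spec:
  llm:
    providers:
    - name: rhelai_vllm                        # arbitrary provider name (placeholder)
      type: openai                             # Red Hat Enterprise Linux AI is OpenAI API-compatible
      url: https://rhelai.example.com:8000/v1  # placeholder: the secure endpoint of the model deployment
      credentialsSecretRef:
        name: rhelai-api-keys                  # secret holding the endpoint's API token, if one is required
      models:
      - name: my-granite-model                 # placeholder: the model name served by vLLM
```

If the endpoint presents a certificate that the cluster does not already trust, additional configuration is needed to make that certificate authority available; see the secure endpoint procedure referenced above.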
1.3.5. About Red Hat OpenShift AI
Red Hat OpenShift AI is OpenAI API-compatible, and is configured in largely the same way as the OpenAI provider.
You need a Large Language Model (LLM) deployed on the single model-serving platform of Red Hat OpenShift AI using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different OpenShift environment than the OpenShift Lightspeed deployment, the model deployment must include a route to expose it outside the cluster. For more information, see About the single-model serving platform.
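For example, if the vLLM inference service is exposed through a route in another cluster, the provider configuration might point at that route. This is a sketch under assumptions: the route hostname, secret name, provider name, and model name are placeholders, and the model name must match the name used when the model was deployed in OpenShift AI:

```yaml
spec:
  llm:
    providers:
    - name: rhoai_vllm                  # arbitrary provider name (placeholder)
      type: openai                      # the vLLM runtime exposes an OpenAI-compatible API
      url: https://my-model-myproject.apps.example.com/v1  # placeholder: external route of the inference service
      credentialsSecretRef:
        name: rhoai-api-keys            # token for the model deployment, if authentication is enabled
      models:
      - name: my-granite-model          # placeholder: model name as deployed in OpenShift AI
```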
1.4. About data use
Red Hat OpenShift Lightspeed is a virtual assistant you interact with using natural language. Using the OpenShift Lightspeed interface, you send chat messages that OpenShift Lightspeed transforms and sends to the Large Language Model (LLM) provider you have configured for your environment. These messages can contain information about your cluster, cluster resources, or other aspects of your environment.
The OpenShift Lightspeed Technology Preview release has limited capabilities to filter or redact the information you provide to the LLM. Do not enter information into the OpenShift Lightspeed interface that you do not want to send to the LLM provider.
By using OpenShift Lightspeed as part of the Technology Preview release, you agree that Red Hat may use all of the messages that you exchange with the LLM provider for any purpose. The transcript recording data uses the Red Hat Insights system back-end and is subject to the same access restrictions and other security policies.
You may email Red Hat and request that your data be deleted at the end of the Technology Preview release period.
1.5. About data, telemetry, transcript, and feedback collection
OpenShift Lightspeed is a virtual assistant that you interact with using natural language. Communicating with OpenShift Lightspeed involves sending chat messages, which may include information about your cluster, your cluster resources, or other aspects of your environment. These messages are sent to OpenShift Lightspeed, potentially with some content filtered or redacted, and then sent to the LLM provider that you have configured.
Do not enter any information into the OpenShift Lightspeed user interface that you do not want sent to the LLM provider.
The transcript recording data uses the Red Hat Insights system back-end and is subject to the same access restrictions and other security policies described in Red Hat Insights data and application security.
1.6. About remote health monitoring
Red Hat records basic information using the Telemeter Client and the Insights Operator, which is generally referred to as Remote Health Monitoring in OpenShift clusters. The OpenShift documentation for remote health monitoring explains data collection and includes instructions for opting out. If you wish to disable transcript or feedback collection, you must follow the procedure for opting out of remote health monitoring. For more information, see "About remote health monitoring" in the OpenShift Container Platform documentation.
1.6.1. Transcript collection overview
By default, transcripts are sent to Red Hat every two hours. If you are using the filtering and redaction functionality, only the filtered or redacted content is sent to Red Hat. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
OpenShift Lightspeed temporarily logs and stores complete transcripts of conversations that users have with the virtual assistant. This includes the following information:
- Queries from the user.
- The complete message sent to the configured Large Language Model (LLM) provider, which includes system instructions, referenced documentation, and the user question.
- The complete response from the LLM provider.
Transcripts originate from the cluster and are associated with the cluster. Red Hat can assign specific clusters to specific customer accounts. Transcripts do not contain any information about users.
1.6.2. Feedback collection overview
OpenShift Lightspeed collects feedback from users who engage with the feedback feature in the virtual assistant interface. If a user submits feedback, the feedback score (thumbs up or down), text feedback (if entered), the user query, and the LLM provider response are stored and sent to Red Hat on the same schedule as transcript collection. If you are using the filtering and redaction functionality, only the filtered or redacted content is sent to Red Hat. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs.
Feedback is associated with the cluster from which it originated, and Red Hat can attribute specific clusters to specific customer accounts. Feedback does not contain any information about which user submitted the feedback, and feedback cannot be tied to any individual user.