Chapter 1. About OpenShift Lightspeed


Red Hat OpenShift Lightspeed is a generative AI service that helps developers and administrators solve problems by providing context-aware recommendations for OpenShift Container Platform.

1.1. OpenShift Lightspeed overview

Use Red Hat OpenShift Lightspeed to troubleshoot and manage your OpenShift clusters. You interact with the virtual assistant in plain English directly inside the OpenShift web console.

1.1.1. About product coverage

Red Hat OpenShift Lightspeed answers your questions by using information from official OpenShift Container Platform documentation.

1.1.1.1. Product exceptions

The OpenShift Container Platform documentation does not cover every Red Hat product. For the following products, OpenShift Lightspeed generates answers from your configured large language model (LLM) rather than from official documentation:

  • Builds for Red Hat OpenShift
  • Red Hat Advanced Cluster Security for Kubernetes
  • Red Hat Advanced Cluster Management for Kubernetes
  • Red Hat CodeReady Workspaces
  • Red Hat OpenShift GitOps
  • Red Hat OpenShift Pipelines
  • Red Hat OpenShift Serverless
  • Red Hat OpenShift Service Mesh 3.x
  • Red Hat Quay

1.2. OpenShift requirements

This section describes the hardware and software requirements for OpenShift Lightspeed, including the supported OpenShift Container Platform versions and CPU architectures.

OpenShift Container Platform clusters enable telemetry by default.

  • When telemetry is on, OpenShift Lightspeed sends your chats and feedback to Red Hat.
  • When telemetry is off, OpenShift Lightspeed does not send this data.
  • To stop OpenShift Lightspeed from sending your chats and feedback, you must disable telemetry for the whole cluster.
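
One way to check whether telemetry is currently enabled for your cluster, as a suggestion rather than an official procedure, is to look for a `cloud.openshift.com` entry in the global pull secret, which the Telemeter Client uses to report data:

```shell
# Inspect the global pull secret; a "cloud.openshift.com" entry
# indicates that telemetry is enabled for the cluster.
oc get secret/pull-secret -n openshift-config \
  --template='{{index .data ".dockerconfigjson"}}' \
  | base64 -d \
  | grep -q '"cloud.openshift.com"' \
  && echo "telemetry enabled" \
  || echo "telemetry disabled"
```

This command requires cluster-admin access to the openshift-config namespace.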

1.2.1. Cluster resource requirements

Ensure that OpenShift Lightspeed has sufficient CPU, memory, and storage to support Service performance and cluster stability without impacting other cluster workloads.

Component                                  Minimum CPU (Cores)   Minimum Memory   Maximum Memory
Application server                         0.5                   1 GB             4 Gi
PostgreSQL database                        0.3                   300 Mi           2 Gi
OpenShift Container Platform web console   0.1                   50 Mi            100 Mi
OpenShift Lightspeed operator              0.1                   64 Mi            256 Mi

1.3. Large language model (LLM) requirements

OpenShift Lightspeed supports Software as a Service (SaaS) and self-hosted large language model (LLM) providers that meet defined authentication requirements.

The LLM is a type of machine learning model that interprets and generates human-like language. When you use the LLM with a virtual assistant, the LLM can accurately interpret questions and offer helpful answers in a conversational manner. The OpenShift Lightspeed Service must have access to the LLM provider.

The Service does not provide the LLM for you, so you must configure the LLM before installing the OpenShift Lightspeed Operator.

Note

Red Hat does not provide support for any specific models or make suggestions or support statements pertaining to models.

The OpenShift Lightspeed Service can use the following SaaS LLM providers:

  • OpenAI
  • Microsoft Azure OpenAI
  • IBM watsonx

If you want to self-host a model, you can use Red Hat OpenShift AI or Red Hat Enterprise Linux AI as your model provider.
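
As an illustrative sketch of what a provider configuration can look like, the following OLSConfig fragment configures OpenAI as the LLM provider. The secret name (`openai-api-keys`) and model name (`gpt-4o`) are example values, not prescribed settings; adapt them to your environment and consult the Operator's configuration reference for the authoritative fields.

```yaml
# Illustrative sketch: OpenAI as the LLM provider in the OLSConfig CR.
# The secret name and model name below are examples.
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: openai
      type: openai
      credentialsSecretRef:
        name: openai-api-keys    # Secret that holds your API token (example name)
      models:
      - name: gpt-4o             # example model name
  ols:
    defaultProvider: openai
    defaultModel: gpt-4o
```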

1.3.1. IBM watsonx

To use IBM watsonx with Red Hat OpenShift Lightspeed, you need an account with IBM Cloud watsonx. For more information, see the Documentation for IBM watsonx as a Service.

1.3.2. OpenAI

To use OpenAI with Red Hat OpenShift Lightspeed, you need access to the OpenAI API platform. For more information, see the OpenAI developer platform documentation.

1.3.3. Microsoft Azure OpenAI

To use Microsoft Azure with Red Hat OpenShift Lightspeed, you need access to Microsoft Azure OpenAI. For more information, see the Azure OpenAI documentation.

1.3.4. Red Hat Enterprise Linux AI

Red Hat Enterprise Linux AI is OpenAI API-compatible, so you configure it as the LLM provider in much the same way as the OpenAI provider.

Because Red Hat Enterprise Linux AI runs in a different environment than the OpenShift Lightspeed deployment, the model deployment must allow access over a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.

OpenShift Lightspeed version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting the LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.

1.3.5. Red Hat OpenShift AI

Red Hat OpenShift AI is OpenAI API-compatible, so you configure it in much the same way as the OpenAI provider.

You must deploy the LLM on the Red Hat OpenShift AI single-model serving platform that uses the virtual large language model (vLLM) runtime. If the model deployment runs in a different OpenShift environment than the OpenShift Lightspeed deployment, include a route to expose the model deployment outside the cluster. For more information, see About the single-model serving platform.

OpenShift Lightspeed version 1.0 and later supports vLLM Server version 0.8.4 and later. When self-hosting the LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.
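
As a hedged sketch only, a provider entry for a model served by the OpenShift AI single-model serving platform typically points at the route that exposes the vLLM endpoint. The provider type name (`rhoai_vllm`), URL, secret name, and model name below are assumptions for illustration; verify the exact field values against the Operator's configuration reference.

```yaml
# Illustrative assumption: an OLSConfig provider entry for a model served
# by Red Hat OpenShift AI with the vLLM runtime. The provider type, URL,
# secret name, and model name are examples, not verified settings.
spec:
  llm:
    providers:
    - name: red_hat_openshift_ai
      type: rhoai_vllm                        # provider type name is an assumption
      url: https://granite.example.com/v1     # route exposing the model deployment (example)
      credentialsSecretRef:
        name: openshift-ai-api-keys           # example secret name
      models:
      - name: granite-3-8b-instruct           # example model name
```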

1.4. FIPS compliance

Red Hat OpenShift Lightspeed supports the Federal Information Processing Standards (FIPS). You can run Red Hat OpenShift Lightspeed on OpenShift clusters that use FIPS mode.

FIPS is a set of publicly announced standards developed by the National Institute of Standards and Technology (NIST), a part of the U.S. Department of Commerce. The primary purpose of FIPS is to ensure the security and interoperability of computer systems used by U.S. federal government agencies and their associated contractors.

Important

When running on OpenShift Container Platform in FIPS mode, OpenShift Lightspeed uses the Red Hat Enterprise Linux cryptographic libraries that have been submitted, or are planned to be submitted, to NIST for FIPS validation on the x86_64, ppc64le, and s390x architectures only. For more information about the NIST validation program, see Cryptographic Module Validation Program (NIST). For the latest NIST status of the individual versions of Red Hat Enterprise Linux cryptographic libraries that have been submitted for validation, see Product compliance.

1.5. Supported architecture

OpenShift Lightspeed works with OpenShift Container Platform clusters that use the x86_64 architecture.

1.6. About running OpenShift Lightspeed in disconnected mode

OpenShift Lightspeed works in disconnected clusters without full internet access.

In a disconnected cluster, you must mirror the container images you need. For more help, see "Mirroring in disconnected environments" in the OpenShift Container Platform documentation.

Note

When you mirror images for a disconnected cluster, include the OpenShift Lightspeed Operator in the image set that you mirror with the oc mirror command.
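
As a hedged sketch of the mirroring step, an oc-mirror ImageSetConfiguration that includes the Operator might look like the following. The catalog index version and API version are assumptions; match them to your cluster version and oc-mirror release.

```yaml
# Illustrative ImageSetConfiguration for oc-mirror that includes the
# OpenShift Lightspeed Operator. The catalog index version is an example.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18
    packages:
    - name: lightspeed-operator
```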

1.7. About data use

OpenShift Lightspeed adds cluster and environment details to your messages. Then, it sends this data to the large language model (LLM) to get an answer.

OpenShift Lightspeed has limited ability to filter or hide the data you send to the LLM. Do not enter any information into the interface that you want to keep private from the LLM.

When you send transcripts or feedback to Red Hat, you agree that Red Hat can use the data to improve the Service. Transcript recording uses the Red Hat Insights system and follows the same security rules and access limits as that system.

You can email Red Hat and ask us to delete your data.

1.8. About data, telemetry, transcript, and feedback collection

OpenShift Lightspeed sends your messages and cluster data through a redaction layer. It does this to clean the data before it goes to the LLM.

Do not enter anything into the OpenShift Lightspeed interface that you want to keep private from the LLM.

The transcript recording data uses the Red Hat Insights system. It follows the same security rules and access limits as that system. You can learn more in the Red Hat Insights security guide.

1.9. Remote health monitoring overview

Remote Health Monitoring uses the Telemeter Client and Insights Operator to gather and report cluster information for Red Hat analysis and support.

You can learn how Red Hat collects data in the OpenShift Container Platform documentation. To stop sending chat transcripts or feedback, you must opt out of remote health monitoring. Follow the steps in the "About remote health monitoring" section of the OpenShift Container Platform documentation.

1.9.1. Transcript collection overview

OpenShift Lightspeed sends chat transcripts to Red Hat on a set schedule. The Service uses a redaction layer to filter data before the Service shares or logs it.

By default, OpenShift Lightspeed sends these transcripts every two hours. Red Hat cannot see your original data. OpenShift Lightspeed hides sensitive data before it reaches any logs.

OpenShift Lightspeed saves conversation transcripts for a short time. This includes:

  • Queries from the user.
  • The complete message sent to the configured large language model (LLM) provider, which includes system instructions, referenced documentation, and the user question.
  • The complete response from the LLM provider.

Transcripts originate from your cluster and remain associated with it. Red Hat can match these clusters to specific customer accounts, but the transcripts do not contain any information that identifies individual users.

1.9.2. Feedback collection overview

OpenShift Lightspeed collects opt-in user feedback from the virtual assistant interface to analyze response accuracy and improve Service quality.

If you submit feedback, Red Hat stores and receives your feedback score, text, and query. Red Hat also receives the large language model (LLM) response on the same schedule as transcripts. When you use the redaction tools, Red Hat receives only the filtered data. Red Hat does not see the original data. OpenShift Lightspeed hides your data before the system logs it.

Your feedback stays associated with the cluster where it began. Red Hat can match these clusters to specific customer accounts. This feedback does not contain any user details, and Red Hat cannot link the feedback to any specific person.

1.9.3. Disabling data collection on the OpenShift Lightspeed Service

Disable data collection for OpenShift Lightspeed by updating the telemetry settings in the OLSConfig custom resource (CR).

By default, OpenShift Lightspeed collects information about the questions you ask and the feedback you offer on the answers that the Service generates.

Prerequisites

  • You have a large language model (LLM) provider available for use with the OpenShift Lightspeed Service.
  • You have installed the OpenShift Lightspeed Operator.
  • You have configured the OLSConfig CR file, which automatically deploys the OpenShift Lightspeed Service.

Procedure

  1. Open the OpenShift Lightspeed OLSConfig CR file by running the following command:

    $ oc edit olsconfig cluster
  2. Change the spec.ols.userDataCollection field to disable data collection for the OpenShift Lightspeed CR.

    apiVersion: ols.openshift.io/v1alpha1
    kind: OLSConfig
    metadata:
      name: cluster
    spec:
      ols:
        userDataCollection:
          feedbackDisabled: true
          transcriptsDisabled: true
    • spec.ols.userDataCollection.feedbackDisabled specifies whether the Service collects your feedback.
    • spec.ols.userDataCollection.transcriptsDisabled specifies whether the Service collects your chat transcripts.
  3. Save the file.
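
To confirm that the change was applied, you can read the field back. This verification step is a suggestion rather than part of the official procedure:

```shell
# Print the userDataCollection settings from the OLSConfig CR.
# Both feedbackDisabled and transcriptsDisabled should report true.
oc get olsconfig cluster -o jsonpath='{.spec.ols.userDataCollection}'
```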

1.10. Additional resources
