About


Red Hat OpenShift Lightspeed 1.0

Introduction to OpenShift Lightspeed

Red Hat OpenShift Documentation Team

Abstract

This document provides an overview of OpenShift Lightspeed features.

Chapter 1. About OpenShift Lightspeed

The following topics provide an overview of Red Hat OpenShift Lightspeed and discuss functional requirements.

1.1. OpenShift Lightspeed overview

Red Hat OpenShift Lightspeed is a generative AI-powered virtual assistant for OpenShift Container Platform. Lightspeed functionality uses a natural-language interface in the OpenShift web console to provide answers to questions that you ask about the product.

This early access program exists so that customers can provide feedback on the user experience, features and capabilities, issues encountered, and any other aspects of the product. This feedback helps align OpenShift Lightspeed with your needs before it is released and made generally available.

1.1.1. About product coverage

Red Hat OpenShift Lightspeed generates responses based on the content from the OpenShift Container Platform product documentation.

1.1.1.1. Product exceptions

The OpenShift Container Platform product documentation does not include information about all products in the Red Hat portfolio. As a result, the Red Hat OpenShift Lightspeed service uses the large language model (LLM) you provide to produce output for the following products or components:

  • Builds for Red Hat OpenShift
  • Red Hat Advanced Cluster Security for Kubernetes
  • Red Hat Advanced Cluster Management for Kubernetes
  • Red Hat CodeReady Workspaces
  • Red Hat OpenShift GitOps
  • Red Hat OpenShift Pipelines
  • Red Hat OpenShift Serverless
  • Red Hat OpenShift Service Mesh 3.x
  • Red Hat Quay

1.2. OpenShift requirements

OpenShift Lightspeed requires OpenShift Container Platform 4.15 or later running on x86_64 hardware. Any installation type or deployment architecture is supported, provided that the cluster meets these version and architecture requirements.

Telemetry is enabled on OpenShift Container Platform clusters by default.

  • If the cluster has telemetry enabled, the OpenShift Lightspeed service sends conversations and feedback to Red Hat by default.
  • If the cluster has telemetry disabled, the OpenShift Lightspeed service does not send conversations and feedback to Red Hat.
  • If the cluster has telemetry enabled, and you do not want the OpenShift Lightspeed service to send conversations and feedback to Red Hat, you must disable telemetry.

1.3. Large Language Model (LLM) requirements

A large language model (LLM) is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with a virtual assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.

The OpenShift Lightspeed service must have access to an LLM provider. The service does not include an LLM, so you must configure an LLM provider before you install the OpenShift Lightspeed Operator.

The OpenShift Lightspeed service supports the following Software as a Service (SaaS) LLM providers:

  • OpenAI
  • Microsoft Azure OpenAI
  • IBM watsonx

If you want to self-host a model, you can use Red Hat OpenShift AI or Red Hat Enterprise Linux AI as your model provider.
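Whichever provider you choose, access is configured through the OpenShift Lightspeed Operator. As a rough sketch only, a SaaS provider such as OpenAI is typically wired up with a credential secret and the Operator's OLSConfig custom resource. The secret name, model name, and field layout below are illustrative assumptions; consult the configuration reference for your Lightspeed version for the authoritative schema.

```yaml
# Illustrative sketch only: names and fields are assumptions, not a
# definitive schema. The secret holds the provider API token.
apiVersion: v1
kind: Secret
metadata:
  name: openai-api-keys            # hypothetical name, referenced below
  namespace: openshift-lightspeed
stringData:
  apitoken: "<your-openai-api-key>"
---
# Illustrative OLSConfig custom resource pointing Lightspeed at OpenAI.
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: openai
      type: openai
      credentialsSecretRef:
        name: openai-api-keys
      models:
      - name: gpt-4o               # any model available to your account
  ols:
    defaultProvider: openai
    defaultModel: gpt-4o
```

Microsoft Azure OpenAI and IBM watsonx follow the same pattern, with additional provider-specific fields (for example, a deployment name or project ID).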

1.3.1. IBM watsonx

To use IBM watsonx with Red Hat OpenShift Lightspeed, you need an account with IBM Cloud watsonx. For more information, see the Documentation for IBM watsonx as a Service.

1.3.2. OpenAI

To use OpenAI with Red Hat OpenShift Lightspeed, you need access to the OpenAI API platform. For more information, see the OpenAI developer platform documentation.

1.3.3. Microsoft Azure OpenAI

To use Microsoft Azure with Red Hat OpenShift Lightspeed, you need access to Microsoft Azure OpenAI. For more information, see the Azure OpenAI documentation.

1.3.4. Red Hat Enterprise Linux AI

Red Hat Enterprise Linux AI is OpenAI API-compatible, and is configured in a similar manner as the OpenAI provider.

You can configure Red Hat Enterprise Linux AI as the Large Language Model (LLM) provider.

Because Red Hat Enterprise Linux AI runs in a different environment than the OpenShift Lightspeed deployment, the model deployment must allow access using a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.
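Because the API is OpenAI-compatible, the provider entry in the Operator's OLSConfig custom resource is essentially the OpenAI configuration pointed at your own endpoint. The provider name, endpoint URL, secret name, and model name below are illustrative assumptions for this sketch, not a definitive schema:

```yaml
# Illustrative fragment of an OLSConfig provider entry for a
# self-hosted, OpenAI-compatible model server such as RHEL AI.
# All names and the endpoint URL here are assumptions.
spec:
  llm:
    providers:
    - name: rhelai
      type: openai                                  # OpenAI-compatible API
      url: https://model.rhelai.example.com:8443/v1 # your secure endpoint
      credentialsSecretRef:
        name: rhelai-api-keys
      models:
      - name: granite-7b-lab
```

A Red Hat OpenShift AI deployment follows the same shape, with the URL pointing at the route that exposes the vLLM-served model.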

1.3.5. Red Hat OpenShift AI

Red Hat OpenShift AI is OpenAI API-compatible, and is configured largely the same as the OpenAI provider.

You need a Large Language Model (LLM) deployed on the single model-serving platform of Red Hat OpenShift AI using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different OpenShift environment than the OpenShift Lightspeed deployment, the model deployment must include a route to expose it outside the cluster. For more information, see About the single-model serving platform.

1.4. OpenShift Lightspeed FIPS support

Red Hat OpenShift Lightspeed is designed for Federal Information Processing Standards (FIPS) compliance.

FIPS is a set of publicly announced standards developed by the National Institute of Standards and Technology (NIST), a part of the U.S. Department of Commerce. The primary purpose of FIPS is to ensure the security and interoperability of computer systems used by U.S. federal government agencies and their associated contractors.

Important

When running on OpenShift Container Platform in FIPS mode, OpenShift Lightspeed uses the Red Hat Enterprise Linux cryptographic libraries that have been submitted, or are planned to be submitted, to NIST for FIPS validation on the x86_64, ppc64le, and s390x architectures only. For more information about the NIST validation program, see Cryptographic Module Validation Program (NIST). For the latest NIST status of the individual versions of Red Hat Enterprise Linux cryptographic libraries that have been submitted for validation, see Product compliance.

1.5. Supported architecture

OpenShift Lightspeed is only available on the OpenShift Container Platform x86_64 architecture.

1.6. About running OpenShift Lightspeed in disconnected mode

The OpenShift Lightspeed Operator and the OpenShift Lightspeed service can work in a disconnected environment. A disconnected environment is an environment that does not have full access to the internet.

In a disconnected environment, you must mirror the required container images into the environment. For more information, see "Mirroring in disconnected environments" in the OpenShift Container Platform product documentation.

Note

When you mirror the images in a disconnected environment, you must list the OpenShift Lightspeed Operator when you use the oc mirror command.
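With the oc mirror plugin, listing the Operator means including its package in the image set configuration. The catalog index tag and the package name below are illustrative assumptions; verify them against your catalog before mirroring:

```yaml
# Illustrative ImageSetConfiguration for oc mirror. The catalog version
# and package name are assumptions; confirm them for your environment,
# for example with "oc mirror list operators --catalog=<catalog>".
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16
    packages:
    - name: lightspeed-operator
```

You then run oc mirror with this configuration, for example: oc mirror --config=imageset-config.yaml docker://<mirror-registry>.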

1.7. About data use

Red Hat OpenShift Lightspeed is a virtual assistant you interact with using natural language. Using the OpenShift Lightspeed interface, you send chat messages that OpenShift Lightspeed transforms and sends to the Large Language Model (LLM) provider you have configured for your environment. These messages can contain information about your cluster, cluster resources, or other aspects of your environment.

The OpenShift Lightspeed service has limited capabilities to filter or redact the information you provide to the LLM. Do not enter information into the OpenShift Lightspeed interface that you do not want to send to the LLM provider.

By sending transcripts or feedback to Red Hat, you agree that Red Hat can use the data for quality assurance purposes. The transcript recording data uses the back end of the Red Hat Insights system and is subject to the same access restrictions and other security policies.

You can email Red Hat and request that your data be deleted.

1.8. About data, telemetry, transcript, and feedback collection

OpenShift Lightspeed is a virtual assistant that you interact with using natural language. Communicating with OpenShift Lightspeed involves sending chat messages, which may include information about your cluster, your cluster resources, or other aspects of your environment. These messages are sent to OpenShift Lightspeed, potentially with some content filtered or redacted, and then sent to the LLM provider that you have configured.

Do not enter any information into the OpenShift Lightspeed user interface that you do not want sent to the LLM provider.

The transcript recording data uses the Red Hat Insights system back-end and is subject to the same access restrictions and other security policies described in Red Hat Insights data and application security.

1.9. Remote health monitoring overview

Red Hat products record basic information by using the Telemeter Client and the Insights Operator, which is generally referred to as Remote Health Monitoring in OpenShift clusters. The OpenShift documentation for remote health monitoring explains data collection and includes instructions for opting out. To disable transcript or feedback collection, you must follow the procedure for opting out of remote health monitoring. For more information, see "About remote health monitoring" in the OpenShift Container Platform documentation.

1.9.1. Transcript collection overview

By default, transcripts are sent to Red Hat every two hours. If you are using the filtering and redaction functionality, only the filtered or redacted content is sent to Red Hat. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs.

OpenShift Lightspeed temporarily logs and stores complete transcripts of conversations that users have with the virtual assistant. This includes the following information:

  • Queries from the user.
  • The complete message sent to the configured Large Language Model (LLM) provider, which includes system instructions, referenced documentation, and the user question.
  • The complete response from the LLM provider.

Transcripts originate from the cluster and are associated with the cluster. Red Hat can attribute specific clusters to specific customer accounts. Transcripts do not contain any information about users.

1.9.2. Feedback collection overview

OpenShift Lightspeed collects feedback from users who engage with the feedback feature in the virtual assistant interface. If a user submits feedback, the feedback score (thumbs up or down), text feedback (if entered), the user query, and the LLM provider response are stored and sent to Red Hat on the same schedule as transcript collection. If you are using the filtering and redaction functionality, only the filtered or redacted content is sent to Red Hat. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs.

Feedback is associated with the cluster from which it originated, and Red Hat can attribute specific clusters to specific customer accounts. Feedback does not contain any information about which user submitted the feedback, and feedback cannot be tied to any individual user.

1.10. Additional resources

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.