Chapter 4. Technology Preview features
This section describes Technology Preview features in Red Hat OpenShift AI 3.0. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- TrustyAI–Llama Stack integration for safety, guardrails, and evaluation
You can now use the Guardrails Orchestrator from TrustyAI with Llama Stack as a Technology Preview feature.
This integration enables built-in detection and evaluation workflows to support AI safety and content moderation. When TrustyAI is enabled and the FMS Orchestrator and detectors are configured, no manual setup is required.
To activate this feature, set the spec.llamastackoperator.managementState field to Managed in the DataScienceCluster custom resource for the OpenShift AI Operator.
For more information, see the TrustyAI FMS Provider on GitHub.
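For illustration, a minimal DataScienceCluster manifest with this field set might look like the following sketch. The instance name is an assumption; edit your existing DataScienceCluster resource rather than creating a new one:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc                # assumed instance name
spec:
  llamastackoperator:
    managementState: Managed       # enables the Llama Stack Operator and the TrustyAI integration
```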
- AI Available Assets page for deployed models and MCP servers
A new AI Available Assets page enables AI engineers and application developers to view and consume deployed AI resources within their projects.
This enhancement introduces a filterable UI that lists available models and Model Context Protocol (MCP) servers in the selected project, allowing users with appropriate permissions to identify accessible endpoints and integrate them directly into the AI Playground or other applications.
- Generative AI Playground for model testing and evaluation
The Generative AI (GenAI) Playground introduces a unified, interactive experience within the OpenShift AI dashboard for experimenting with foundation and custom models.
Users can test prompts, compare models, and evaluate Retrieval-Augmented Generation (RAG) workflows by uploading documents and chatting with their content. The GenAI Playground also supports integration with approved Model Context Protocol (MCP) servers and enables export of prompts and agent configurations as runnable code for continued iteration in local IDEs.
Chat context is preserved within each session, providing a suitable environment for prompt engineering and model experimentation.
- Support for air-gapped Llama Stack deployments
You can now install and operate Llama Stack and RAG/Agentic components in fully disconnected (air-gapped) OpenShift AI environments.
This enhancement enables secure deployment of Llama Stack features without internet access, allowing organizations to use AI capabilities while maintaining compliance with strict network security policies.
- Feature Store integration with Workbenches and new user access capabilities
This feature is available as a Technology Preview.
Feature Store is now integrated with OpenShift AI data science projects and workbenches. This integration also introduces centrally managed, role-based access control (RBAC) capabilities for improved governance.
These enhancements provide two key capabilities:
- Feature development within the workbench environment.
- Administrator-controlled user access.
This update simplifies and accelerates feature discovery and consumption for data scientists while allowing platform teams to maintain full control over infrastructure and feature access.
- Feature Store user interface
The Feature Store component now includes a web-based user interface (UI).
You can use the UI to view registered Feature Store objects and their relationships, such as features, data sources, entities, and feature services.
To enable the UI, edit your FeatureStore custom resource (CR) instance. When you save the change, the Feature Store Operator starts the UI container and creates an OpenShift route for access.
For more information, see Setting up the Feature Store user interface for initial use.
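As a rough sketch only, based on the upstream Feast operator schema, the edit might add a UI service to the FeatureStore CR. The apiVersion, instance name, project name, and the services.ui field are assumptions; check the CRD in your cluster for the exact fields:

```yaml
apiVersion: feast.dev/v1alpha1        # assumed API group/version from the upstream Feast operator
kind: FeatureStore
metadata:
  name: sample-feature-store          # assumed instance name
spec:
  feastProject: my_project            # assumed Feast project name
  services:
    ui: {}                            # assumed field: asks the operator to deploy the UI container and route
```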
- IBM Spyre AI Accelerator model serving support on x86 platforms
- Model serving with the IBM Spyre AI Accelerator is now available as a Technology Preview feature for x86 platforms. The IBM Spyre Operator automates installation and integrates the device plugin, secondary scheduler, and monitoring. For more information, see the IBM Spyre Operator catalog entry.
- Build Generative AI Apps with Llama Stack on OpenShift AI
With this release, the Llama Stack Technology Preview feature enables Retrieval-Augmented Generation (RAG) and agentic workflows for building next-generation generative AI applications. It supports remote inference, built-in embeddings, and vector database operations. It also integrates with providers such as the TrustyAI provider for safety and the TrustyAI LM-Eval provider for evaluation.
This preview includes tools, components, and guidance for enabling the Llama Stack Operator, interacting with the RAG Tool, and automating PDF ingestion and keyword search capabilities to enhance document discovery.
- Centralized platform observability
Centralized platform observability, including metrics, traces, and built-in alerts, is available as a Technology Preview feature. This solution introduces a dedicated, pre-configured observability stack for OpenShift AI that allows cluster administrators to perform the following actions:
- View platform metrics (Prometheus) and distributed traces (Tempo) for OpenShift AI components and workloads.
- Manage a set of built-in alerts (Alertmanager) that cover critical component health and performance issues.
- Export platform and workload metrics to external third-party observability tools by editing the DSCInitialization (DSCI) custom resource.
You can enable this feature by integrating with the Cluster Observability Operator, the Red Hat build of OpenTelemetry, and the Tempo Operator. For more information, see Monitoring and observability and Managing observability.
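As an illustration only, enabling the observability stack might involve a DSCInitialization edit similar to the following sketch. The field names and instance name are assumptions; the exact schema, including exporter settings for third-party tools, is described in Managing observability:

```yaml
apiVersion: dscinitialization.opendatahub.io/v1   # assumed API group/version
kind: DSCInitialization
metadata:
  name: default-dsci            # assumed instance name
spec:
  monitoring:
    managementState: Managed    # assumed field: turns on the platform observability stack
```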
- Support for Llama Stack Distribution version 0.3.0
The Llama Stack Distribution now includes version 0.3.0 as a Technology Preview feature.
This update introduces several enhancements, including expanded support for retrieval-augmented generation (RAG) pipelines, improved evaluation provider integration, and updated APIs for agent and vector store management. It also provides compatibility updates aligned with recent OpenAI API extensions and infrastructure optimizations for distributed inference.
The previously supported version was 0.2.22.
- Support for Kubernetes Event-driven Autoscaling (KEDA)
OpenShift AI now supports Kubernetes Event-driven Autoscaling (KEDA) in its KServe RawDeployment mode. This Technology Preview feature enables metrics-based autoscaling for inference services, allowing for more efficient management of accelerator resources, reduced operational costs, and improved performance for your inference services.
To set up autoscaling for your inference service in KServe RawDeployment mode, you need to install and configure the OpenShift Custom Metrics Autoscaler (CMA), which is based on KEDA.
For more information about this feature, see Configuring metrics-based autoscaling.
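Conceptually, CMA scales workloads through KEDA ScaledObject resources. The following sketch shows what metrics-based scaling of an inference service Deployment could look like; the target name, Prometheus query, metric, and threshold are all placeholders, and OpenShift AI may create or manage the equivalent object for you, so follow Configuring metrics-based autoscaling for the supported procedure:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: llm-predictor-scaler                      # placeholder name
  namespace: my-project                           # placeholder namespace
spec:
  scaleTargetRef:
    name: llm-predictor                           # placeholder: Deployment backing the inference service
  minReplicaCount: 1
  maxReplicaCount: 4
  triggers:
    - type: prometheus
      metadata:
        serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
        query: vllm:num_requests_waiting{namespace="my-project"}   # placeholder metric and query
        threshold: "5"
        # authentication against the in-cluster monitoring endpoint is typically also required
```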
- LM-Eval model evaluation UI feature
- TrustyAI now offers a user-friendly UI for LM-Eval model evaluations as a Technology Preview feature. This feature allows you to enter evaluation parameters for a given model and displays an evaluation results page, all from the UI.
- Use Guardrails Orchestrator with Llama Stack
You can now run detections using the Guardrails Orchestrator tool from TrustyAI with Llama Stack as a Technology Preview feature, using the built-in detection component. To use this feature, ensure that TrustyAI is enabled, that the FMS Orchestrator and detectors are set up, and, if needed for full compatibility, that KServe RawDeployment mode is in use. No manual setup is required. Then, in the DataScienceCluster custom resource for the Red Hat OpenShift AI Operator, set the spec.llamastackoperator.managementState field to Managed.
For more information, see the TrustyAI FMS Provider on GitHub.
- Support for creating and managing Ray Jobs with the CodeFlare SDK
You can now create and manage Ray Jobs on Ray Clusters directly through the CodeFlare SDK.
This enhancement aligns the CodeFlare SDK workflow with the Kubeflow Training Operator (KFTO) model, where a job is created, run, and completed automatically. It reduces manual cluster management by preventing Ray Clusters from remaining active after job completion.
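Under the hood, this workflow maps to KubeRay RayJob resources. The following minimal RayJob sketch (names, image, and entrypoint are placeholders) illustrates the create-run-complete model, with shutdownAfterJobFinishes ensuring the Ray Cluster is removed when the job finishes; with the CodeFlare SDK you would normally not write this manifest yourself:

```yaml
apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: sample-ray-job                      # placeholder name
spec:
  entrypoint: python train.py               # placeholder command run on the Ray Cluster
  shutdownAfterJobFinishes: true            # tear down the Ray Cluster once the job completes
  rayClusterSpec:
    headGroupSpec:
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-head
              image: quay.io/example/ray:latest      # placeholder image
    workerGroupSpecs:
      - groupName: workers
        replicas: 2
        rayStartParams: {}
        template:
          spec:
            containers:
              - name: ray-worker
                image: quay.io/example/ray:latest    # placeholder image
```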
- Support for direct authentication with an OIDC identity provider
Direct authentication with an OpenID Connect (OIDC) identity provider is now available as a Technology Preview feature.
This enhancement centralizes OpenShift AI service authentication through the Gateway API, providing a secure, scalable, and manageable authentication model. You can configure the Gateway API with your external OIDC provider by using the GatewayConfig custom resource.
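Purely as a hypothetical illustration (every field name below, including the API group and version, is an assumption and not the documented GatewayConfig schema), configuring an external OIDC provider could take a shape like this:

```yaml
apiVersion: services.platform.opendatahub.io/v1alpha1   # hypothetical API group/version
kind: GatewayConfig
metadata:
  name: default-gateway                                  # hypothetical instance name
spec:
  oidc:                                                  # hypothetical fields, shown for illustration only
    issuerURL: https://idp.example.com/realms/ai
    clientID: openshift-ai
    clientSecretRef:
      name: oidc-client-secret                           # Secret holding the OIDC client secret
```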
- Custom flow estimator for Synthetic Data Generation pipelines
You can now use a custom flow estimator for synthetic data generation (SDG) pipelines.
For supported and compatible tagged SDG teacher models, the estimator helps you evaluate a chosen teacher model, custom flow, and supported hardware on a sample dataset before running full workloads.
- Llama Stack support and optimization for single node OpenShift (SNO)
Llama Stack core can now deploy and run efficiently on single node OpenShift (SNO).
This enhancement optimizes component startup and resource usage so that Llama Stack can operate reliably in single-node cluster environments.
- FAISS vector storage integration
You can now use the FAISS (Facebook AI Similarity Search) library as an inline vector store in OpenShift AI.
FAISS is an open-source framework for high-performance vector search and clustering, optimized for dense numerical embeddings with both CPU and GPU support. When enabled with an embedded SQLite backend in the Llama Stack Distribution, FAISS stores embeddings locally within the container, removing the need for an external vector database service.
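In a Llama Stack run configuration, the inline FAISS provider with a SQLite-backed key-value store looks roughly like the following sketch; the provider identifiers and the db_path are assumptions and may differ between distribution versions:

```yaml
providers:
  vector_io:
    - provider_id: faiss                 # assumed provider identifier
      provider_type: inline::faiss       # assumed inline FAISS provider type
      config:
        kvstore:
          type: sqlite                   # embedded SQLite backend keeps embeddings local to the container
          db_path: /opt/app-root/src/.llama/faiss_store.db   # assumed path inside the container
```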
- New Feature Store component
You can now install and manage Feature Store as a configurable component in OpenShift AI. Based on the open-source Feast project, Feature Store acts as a bridge between ML models and data, enabling consistent and scalable feature management across the ML lifecycle.
This Technology Preview release introduces the following capabilities:
- Centralized feature repository for consistent feature reuse
- Python SDK and CLI for programmatic and command-line interactions to define, manage, and retrieve features for ML models
- Feature definition and management
- Support for a wide range of data sources
- Data ingestion via feature materialization
- Feature retrieval for both online model inference and offline model training
- Role-Based Access Control (RBAC) to protect sensitive features
- Extensibility and integration with third-party data and compute providers
- Scalability to meet enterprise ML needs
- Searchable feature catalog
- Data lineage tracking for enhanced observability
For configuration details, see Configuring Feature Store.
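Assuming that Feature Store is enabled like other OpenShift AI components, through the DataScienceCluster custom resource, the edit might resemble the following sketch. The component key shown here (feastoperator) and its location under spec.components are assumptions; see Configuring Feature Store for the supported procedure:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc               # assumed instance name
spec:
  components:
    feastoperator:                # assumed component key for Feature Store
      managementState: Managed
```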
- FIPS support for Llama Stack and RAG deployments
You can now deploy Llama Stack and RAG or agentic solutions in regulated environments that require FIPS compliance.
This enhancement provides FIPS-certified and compatible deployment patterns to help organizations meet strict regulatory and certification requirements for AI workloads.
- Validated sdg-hub notebooks for Red Hat AI Platform
Validated sdg_hub example notebooks are now available to provide a notebook-driven user experience in OpenShift AI 3.0. These notebooks support multiple Red Hat platforms and enable customization through SDG pipelines. They include examples for the following use cases:
- Knowledge and skills tuning, including annotated examples for fine-tuning models.
- Synthetic data generation with reasoning traces to customize reasoning models.
- Custom SDG pipelines that demonstrate using default blocks and creating new blocks for specialized workflows.
- RAGAS evaluation provider for Llama Stack (inline and remote)
You can now use the Retrieval-Augmented Generation Assessment (RAGAS) evaluation provider to measure the quality and reliability of RAG systems in OpenShift AI.
RAGAS provides metrics for retrieval quality, answer relevance, and factual consistency, helping you identify issues and optimize RAG pipeline configurations.
The integration with the Llama Stack evaluation API supports two deployment modes:
- Inline provider: Runs RAGAS evaluation directly within the Llama Stack server process.
- Remote provider: Runs RAGAS evaluation as distributed jobs using OpenShift AI pipelines.
The RAGAS evaluation provider is now included in the Llama Stack distribution.
- Enable targeted deployment of workbenches to specific worker nodes in Red Hat OpenShift AI Dashboard using node selectors
Hardware profiles are now available as a Technology Preview. The hardware profiles feature enables users to target specific worker nodes for workbenches or model-serving workloads. It allows users to target specific accelerator types or CPU-only nodes.
This feature replaces the current accelerator profiles feature and container size selector field, offering a broader set of capabilities for targeting different hardware configurations. While accelerator profiles, taints, and tolerations provide some capabilities for matching workloads to hardware, they do not ensure that workloads land on specific nodes, especially if some nodes lack the appropriate taints.
The hardware profiles feature supports both accelerator and CPU-only configurations, along with node selectors, to enhance targeting capabilities for specific worker nodes. Administrators can configure hardware profiles in the settings menu. Users can select the enabled profiles using the UI for workbenches, model serving, and AI pipelines where applicable.
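As an illustrative sketch only (administrators normally create profiles from the settings menu; the API group, field names, and values below are assumptions), a hardware profile that targets GPU worker nodes with a node selector and toleration might look like this:

```yaml
apiVersion: dashboard.opendatahub.io/v1alpha1   # assumed API group/version
kind: HardwareProfile
metadata:
  name: nvidia-gpu-nodes                        # hypothetical profile name
spec:
  displayName: NVIDIA GPU worker nodes          # assumed field: name shown in the UI
  enabled: true
  identifiers:                                  # assumed field: resources the workload can request
    - identifier: nvidia.com/gpu
      displayName: GPU
      minCount: 1
      maxCount: 2
      defaultCount: 1
  nodeSelector:                                 # schedules matching workloads onto labeled worker nodes
    nvidia.com/gpu.present: "true"
  tolerations:                                  # allows scheduling onto tainted GPU nodes
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
```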
- RStudio Server workbench image
With the RStudio Server workbench image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions.
To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Important
Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
- CUDA - RStudio Server workbench image
With the CUDA - RStudio Server workbench image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools.
To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Important
Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench.
- Support for multinode deployment of very large models
- Serving models over multiple graphics processing unit (GPU) nodes when using a single-model serving runtime is now available as a Technology Preview feature. Deploy your models across multiple GPU nodes to improve efficiency when deploying large models such as large language models (LLMs). For more information, see Deploying models by using multiple GPU nodes.