
Chapter 4. Technology Preview features


Important

This section describes Technology Preview features in Red Hat OpenShift AI 3.3. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenAI-compatible annotations for search and responses in Llama Stack

Starting with OpenShift AI 3.3, Llama Stack provides OpenAI-compatible grounding and citation annotations for search-backed responses as a Technology Preview feature.

This enhancement enables retrieval-augmented generation (RAG) applications to trace generated responses back to source documents by using the same annotation schemas returned by OpenAI Search and Responses APIs. The feature supports document source attribution and preserves citation metadata in API responses, allowing existing OpenAI client applications to consume citation information without code changes.

This capability improves transparency, auditability, and explainability for enterprise RAG workloads, and serves as a foundation for future advanced tracing and observability features in Llama Stack. For more information, see OpenAI API annotations for search and responses.
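
For illustration, an annotation on a search-backed response might look like the following hypothetical payload, which follows the OpenAI url_citation annotation shape; the exact fields depend on the API version and are an assumption here:

{
  "type": "url_citation",
  "start_index": 120,
  "end_index": 156,
  "url": "https://example.com/source-document",
  "title": "Source document title"
}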

The Llama Stack Operator available on multi-architecture clusters
The Llama Stack Operator is now deployable on multi-architecture clusters in OpenShift AI version 3.3 and is available by default.
Llama Stack versions in OpenShift AI 3.3
OpenShift AI 3.3.0 includes Open Data Hub Llama Stack version 0.4.2.1+rhai0, which is based on upstream Llama Stack version 0.4.2.
The Llama Stack Operator with ConfigMap-driven image updates

The Llama Stack Operator in OpenShift AI 3.3 now supports ConfigMap-driven image updates for LlamaStackDistribution resources. This lets you apply security and bug fixes to the images without installing a new Operator version. To enable this feature, add the following parameters to your ConfigMap:

data:
  image-overrides: |
    starter-gpu: registry.redhat.io/rhoai/odh-llama-stack-core-rhel9:v3.3
    starter: registry.redhat.io/rhoai/odh-llama-stack-core-rhel9:v3.3

Using the starter-gpu and starter distribution names as keys allows the Operator to apply these overrides automatically.

To update the Llama Stack distribution image for all starter distributions, run the following command:

$ kubectl patch configmap llama-stack-operator-config -n llama-stack-k8s-operator-system --type merge -p '{"data":{"image-overrides":"starter: quay.io/opendatahub/llama-stack:latest"}}'

After you update the ConfigMap, the LlamaStackDistribution resources restart with the new image.

Model-as-a-Service (MaaS) integration

This feature is available as a Technology Preview.

OpenShift AI now includes Model-as-a-Service (MaaS) to address resource consumption and governance challenges associated with serving large language models (LLMs).

MaaS provides centralized control over model access and resource usage by exposing models through managed API endpoints, allowing administrators to enforce consumption policies across teams.

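For example, a client application might call a MaaS-managed endpoint with a team-scoped token; the hostname, model name, and token variable below are placeholders:

$ curl https://maas.example.com/v1/chat/completions \
    -H "Authorization: Bearer $MAAS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"model": "example-model", "messages": [{"role": "user", "content": "Hello"}]}'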

MLServer ServingRuntime for KServe

The MLServer serving runtime for KServe is now available as a Technology Preview feature in Red Hat OpenShift AI. You can use this runtime to deploy models trained on structured data, such as classical machine learning models. You can deploy models directly without converting them to ONNX format, which simplifies the deployment process and improves performance.

This runtime supports common machine learning frameworks, such as scikit-learn and XGBoost.
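
For example, a scikit-learn model might be deployed with an InferenceService similar to the following sketch; the runtime name and storage URI are illustrative and depend on how the MLServer runtime is registered in your cluster:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-example
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      runtime: kserve-mlserver            # illustrative runtime name
      storageUri: s3://models/sklearn-example   # placeholder model location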

pgvector support as a remote vector store provider in Llama Stack

Starting with OpenShift AI 3.2, you can use PostgreSQL with the pgvector extension as a remote vector store provider for the Llama Stack vector_store endpoint as a Technology Preview feature.

This enhancement enables vector storage backed by PostgreSQL, providing durable and transactional persistence for vector embeddings. For more information, see Llama Stack API provider support and Deploying a PostgreSQL instance with pgvector.
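
As an illustration, a remote pgvector provider entry in the Llama Stack run configuration might look like the following sketch; field names follow the upstream pgvector provider and can vary between releases:

vector_io:
  - provider_id: pgvector
    provider_type: remote::pgvector
    config:
      host: postgres.example.svc.cluster.local   # placeholder PostgreSQL service
      port: 5432
      db: llamastack
      user: llamastack
      password: ${env.PGVECTOR_PASSWORD}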

Llama Stack versions in OpenShift AI 3.2
OpenShift AI 3.2.0 uses the Open Data Hub Llama Stack version 0.3.5+rhai0 in the Llama Stack Distribution, which is based on the upstream Llama Stack version 0.3.5.
Llama Stack servers now require installation of the PostgreSQL Operator
In OpenShift AI 3.2, the PostgreSQL Operator is now required to deploy a Llama Stack server. For more information, see the Deploying a Llama Stack server documentation.
Enabling high availability on Llama Stack
As a Technology Preview feature, you can configure Llama Stack servers to remain operational when a single component fails. You can enable PostgreSQL high-availability settings in your LlamaStackDistribution custom resource. For more information, see the Enabling high availability on Llama Stack (Optional) documentation.
Custom embeddings on Llama Stack

OpenShift AI 3.2 allows you to customize your embedding models as a Technology Preview feature. In the version of Llama Stack shipped in OpenShift AI 3.2, vLLM handles embeddings by default. You can set the VLLM_EMBEDDING_URL environment variable in your LlamaStackDistribution custom resource to configure embeddings, or you can use a custom embedding provider. For example:

env:
  - name: ENABLE_SENTENCE_TRANSFORMERS
    value: "true"
  - name: EMBEDDING_PROVIDER
    value: "sentence-transformers"
NVIDIA NeMo Guardrails
You can use NVIDIA NeMo Guardrails as a Technology Preview feature to add guardrails and safety controls to your deployed models in Red Hat OpenShift AI. NeMo Guardrails provides a framework for controlling conversations with large language models, enabling you to define a variety of rails, such as sensitive data detection, content filtering, or custom validation rules.
Stop button for chatbot in Generative AI Studio
You can interrupt the chatbot as it is composing a response to a prompt. In the Playground, after you send a prompt, the Send button in the chat input field changes to a Stop button. Click it if you want to interrupt the model’s response, for example, when the response takes longer than you anticipated or if you notice that you made an error in your prompt. The chatbot posts "You stopped this message" to confirm your stop request.
Kubeflow Trainer v2

Kubeflow Trainer v2 is now available as a Technology Preview feature in OpenShift AI 3.2.

Kubeflow Trainer v2 is the next generation of distributed training for OpenShift AI, replacing the Kubeflow Training Operator v1 (KFTOv1). This Kubernetes-native solution simplifies how data scientists and ML engineers run PyTorch training workloads at scale using a unified TrainJob API and Python SDK.

This Technology Preview release introduces the following capabilities:

  • Simplified job definitions using TrainJob and TrainingRuntime resources
  • Python SDK for programmatic job creation and management
  • A new web-based user interface for inspecting and interacting with training jobs
  • Real-time progress tracking with visibility into training steps, epochs, and metrics
  • Smart checkpoint management with automatic preservation during pod preemption or termination
  • Pausing and resuming training jobs
  • Resource-aware scheduling via native integration with Red Hat build of Kueue

Users of the deprecated Kubeflow Training Operator v1 (KFTOv1) should migrate their workloads to Kubeflow Trainer v2 before KFTOv1 is removed. For guidance and more details, see the migration guide.

For more information about Kubeflow Trainer v2 features and usage, see the Kubeflow Trainer v2 documentation.
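
As an illustration, a minimal TrainJob might look like the following sketch. This is based on the upstream Kubeflow Trainer v2 API rather than a product-verified example; the runtime name and container image are placeholders:

apiVersion: trainer.kubeflow.org/v1alpha1
kind: TrainJob
metadata:
  name: pytorch-example
spec:
  runtimeRef:
    name: torch-distributed             # references a TrainingRuntime defined on the cluster
  trainer:
    numNodes: 2                         # number of training nodes
    image: quay.io/example/train:latest # placeholder training image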

TrustyAI–Llama Stack integration for safety, guardrails, and evaluation

You can now use the Guardrails Orchestrator from TrustyAI with Llama Stack as a Technology Preview feature.

This integration enables built-in detection and evaluation workflows to support AI safety and content moderation. When TrustyAI is enabled and the FMS Orchestrator and detectors are configured, no manual setup is required.

To activate this feature, set the following field in the DataScienceCluster custom resource for the OpenShift AI Operator: spec.llamastackoperator.managementState: Managed
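
For example, the corresponding DataScienceCluster fragment looks like the following minimal sketch; the resource name is a placeholder and the API version can vary between releases:

apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc        # placeholder name
spec:
  llamastackoperator:
    managementState: Managed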

For more information, see the TrustyAI FMS Provider repository on GitHub.

AI Available Assets page for deployed models and MCP servers

A new AI Available Assets page enables AI engineers and application developers to view and consume deployed AI resources within their projects.

This enhancement introduces a filterable UI that lists available models and Model Context Protocol (MCP) servers in the selected project, allowing users with appropriate permissions to identify accessible endpoints and integrate them directly into the AI Playground or other applications.

Generative AI Playground for model testing and evaluation

The Generative AI (GenAI) Playground introduces a unified, interactive experience within the OpenShift AI dashboard for experimenting with foundation and custom models.

Users can test prompts, compare models, and evaluate Retrieval-Augmented Generation (RAG) workflows by uploading documents and chatting with their content. The GenAI Playground also supports integration with approved Model Context Protocol (MCP) servers and enables export of prompts and agent configurations as runnable code for continued iteration in local IDEs.

Chat context is preserved within each session, providing a suitable environment for prompt engineering and model experimentation.

Support for air-gapped Llama Stack deployments

You can now install and operate Llama Stack and RAG/Agentic components in fully disconnected (air-gapped) OpenShift AI environments.

This enhancement enables secure deployment of Llama Stack features without internet access, allowing organizations to use AI capabilities while maintaining compliance with strict network security policies.

Feature Store integration with Workbenches and new user access capabilities

This feature is available as a Technology Preview.

The Feature Store is now integrated with OpenShift AI, data science projects, and workbenches. This integration also introduces centrally managed, role-based access control (RBAC) capabilities for improved governance.

These enhancements provide two key capabilities:

  • Feature development within the workbench environment.
  • Administrator-controlled user access.

This update simplifies and accelerates feature discovery and consumption for data scientists while allowing platform teams to maintain full control over infrastructure and feature access.

Feature Store user interface

The Feature Store component now includes a web-based user interface (UI).

You can use the UI to view registered Feature Store objects and their relationships, such as features, data sources, entities, and feature services.

To enable the UI, edit your FeatureStore custom resource (CR) instance. When you save the change, the Feature Store Operator starts the UI container and creates an OpenShift route for access.
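
For example, assuming the Feast operator's services schema, the UI might be enabled with a change similar to the following sketch; consult the linked setup documentation for the exact fields:

apiVersion: feast.dev/v1alpha1
kind: FeatureStore
metadata:
  name: sample-feature-store
spec:
  feastProject: my_project     # existing Feast project name
  services:
    ui: {}                     # assumed field that enables the UI container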

For more information, see Setting up the Feature Store user interface for initial use.

IBM Spyre AI Accelerator model serving support on x86 platforms
Model serving with the IBM Spyre AI Accelerator is now available as a Technology Preview feature for x86 platforms. The IBM Spyre Operator automates installation and integrates the device plugin, secondary scheduler, and monitoring. For more information, see the IBM Spyre Operator catalog entry.
Build Generative AI Apps with Llama Stack on OpenShift AI

With this release, the Llama Stack Technology Preview feature enables Retrieval-Augmented Generation (RAG) and agentic workflows for building next-generation generative AI applications. It supports remote inference, built-in embeddings, and vector database operations. It also integrates with providers such as the TrustyAI safety provider and the TrustyAI LM-Eval provider for evaluation.

This preview includes tools, components, and guidance for enabling the Llama Stack Operator, interacting with the RAG Tool, and automating PDF ingestion and keyword search capabilities to enhance document discovery.

Centralized platform observability

Centralized platform observability, including metrics, traces, and built-in alerts, is available as a Technology Preview feature. This solution introduces a dedicated, pre-configured observability stack for OpenShift AI that allows cluster administrators to perform the following actions:

  • View platform metrics (Prometheus) and distributed traces (Tempo) for OpenShift AI components and workloads.
  • Manage a set of built-in alerts (Alertmanager) that cover critical component health and performance issues.
  • Export platform and workload metrics to external third-party observability tools by editing the DSCInitialization (DSCI) custom resource.

You can enable this feature by integrating with the Cluster Observability Operator, the Red Hat build of OpenTelemetry, and the Tempo Operator. For more information, see Monitoring and observability and Managing observability.

Support for Llama Stack Distribution version 0.3.0

The Llama Stack Distribution now includes version 0.3.0 as a Technology Preview feature.

This update introduces several enhancements, including expanded support for retrieval-augmented generation (RAG) pipelines, improved evaluation provider integration, and updated APIs for agent and vector store management. It also provides compatibility updates aligned with recent OpenAI API extensions and infrastructure optimizations for distributed inference.

The previously supported version was 0.2.22.

Support for Kubernetes Event-driven Autoscaling (KEDA)

OpenShift AI now supports Kubernetes Event-driven Autoscaling (KEDA) in its KServe RawDeployment mode. This Technology Preview feature enables metrics-based autoscaling for inference services, allowing for more efficient management of accelerator resources, reduced operational costs, and improved performance for your inference services.

To set up autoscaling for your inference service in KServe RawDeployment mode, you need to install and configure the OpenShift Custom Metrics Autoscaler (CMA), which is based on KEDA.
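
For example, CMA can be installed through an Operator Lifecycle Manager Subscription similar to the following sketch; the channel and namespace values are illustrative and should match your cluster's catalog:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  channel: stable
  name: openshift-custom-metrics-autoscaler-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace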

For more information about this feature, see Configuring metrics-based autoscaling.

LM-Eval model evaluation UI feature
TrustyAI now offers a user-friendly UI for LM-Eval model evaluations as a Technology Preview feature. You can enter evaluation parameters for a given model and view the results on an evaluation-results page, all from the UI.
Support for creating and managing Ray Jobs with the CodeFlare SDK

You can now create and manage Ray Jobs on Ray Clusters directly through the CodeFlare SDK.

This enhancement aligns the CodeFlare SDK workflow with the Kubeflow Training Operator (KFTO) model, where a job is created, run, and completed automatically. It also reduces manual cluster management by ensuring that Ray Clusters do not remain active after job completion.

Support for direct authentication with an OIDC identity provider

Direct authentication with an OpenID Connect (OIDC) identity provider is now available as a Technology Preview feature.

This enhancement centralizes OpenShift AI service authentication through the Gateway API, providing a secure, scalable, and manageable authentication model. You can configure the Gateway API with your external OIDC provider by using the GatewayConfig custom resource.

Custom flow estimator for Synthetic Data Generation pipelines

You can now use a custom flow estimator for synthetic data generation (SDG) pipelines.

For supported and compatible tagged SDG teacher models, the estimator helps you evaluate a chosen teacher model, custom flow, and supported hardware on a sample dataset before running full workloads.

Llama Stack support and optimization for single node OpenShift (SNO)

Llama Stack core can now deploy and run efficiently on single node OpenShift (SNO).

This enhancement optimizes component startup and resource usage so that Llama Stack can operate reliably in single-node cluster environments.

FAISS vector storage integration

You can now use the FAISS (Facebook AI Similarity Search) library as an inline vector store in OpenShift AI.

FAISS is an open-source framework for high-performance vector search and clustering, optimized for dense numerical embeddings with both CPU and GPU support. When enabled with an embedded SQLite backend in the Llama Stack Distribution, FAISS stores embeddings locally within the container, removing the need for an external vector database service.
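
As an illustration, an inline FAISS provider with a SQLite backend might be configured in the Llama Stack run configuration as follows; field names follow the upstream inline::faiss provider and can vary between releases:

vector_io:
  - provider_id: faiss
    provider_type: inline::faiss
    config:
      kvstore:
        type: sqlite
        db_path: /opt/app-root/data/faiss_store.db   # embeddings persist locally in the container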

New Feature Store component

You can now install and manage Feature Store as a configurable component in OpenShift AI. Based on the open-source Feast project, Feature Store acts as a bridge between ML models and data, enabling consistent and scalable feature management across the ML lifecycle.

This Technology Preview release introduces the following capabilities:

  • Centralized feature repository for consistent feature reuse
  • Python SDK and CLI for programmatic and command-line interactions to define, manage, and retrieve features for ML models (see the CLI example after this list)
  • Feature definition and management
  • Support for a wide range of data sources
  • Data ingestion via feature materialization
  • Feature retrieval for both online model inference and offline model training
  • Role-Based Access Control (RBAC) to protect sensitive features
  • Extensibility and integration with third-party data and compute providers
  • Scalability to meet enterprise ML needs
  • Searchable feature catalog
  • Data lineage tracking for enhanced observability

For configuration details, see Configuring Feature Store.
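
For example, the Feast CLI supports a typical workflow like the following; the repository name is a placeholder:

$ feast init my_feature_repo                 # scaffold a feature repository
$ cd my_feature_repo/feature_repo
$ feast apply                                # register feature definitions with the registry
$ feast materialize-incremental $(date -u +%Y-%m-%dT%H:%M:%S)   # load features into the online store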

FIPS support for Llama Stack and RAG deployments

You can now deploy Llama Stack and RAG or agentic solutions in regulated environments that require FIPS compliance.

This enhancement provides FIPS-certified and compatible deployment patterns to help organizations meet strict regulatory and certification requirements for AI workloads.

Validated sdg-hub notebooks for Red Hat AI Platform

Validated sdg_hub example notebooks are now available to provide a notebook-driven user experience in OpenShift AI 3.0.

These notebooks support multiple Red Hat platforms and enable customization through SDG pipelines. They include examples for the following use cases:

  • Knowledge and skills tuning, including annotated examples for fine-tuning models.
  • Synthetic data generation with reasoning traces to customize reasoning models.
  • Custom SDG pipelines that demonstrate using default blocks and creating new blocks for specialized workflows.
RAGAS evaluation provider for Llama Stack (inline and remote)

You can now use the Retrieval-Augmented Generation Assessment (RAGAS) evaluation provider to measure the quality and reliability of RAG systems in OpenShift AI.

RAGAS provides metrics for retrieval quality, answer relevance, and factual consistency, helping you identify issues and optimize RAG pipeline configurations.

The integration with the Llama Stack evaluation API supports two deployment modes:

  • Inline provider: Runs RAGAS evaluation directly within the Llama Stack server process.
  • Remote provider: Runs RAGAS evaluation as distributed jobs using OpenShift AI pipelines.

The RAGAS evaluation provider is now included in the Llama Stack distribution.

Enable targeted deployment of workbenches to specific worker nodes in Red Hat OpenShift AI Dashboard using node selectors

Hardware profiles are now available as a Technology Preview. The hardware profiles feature enables users to target specific worker nodes for workbenches or model-serving workloads, including particular accelerator types or CPU-only nodes.

This feature replaces the current accelerator profiles feature and container size selector field, offering a broader set of capabilities for targeting different hardware configurations. While accelerator profiles, taints, and tolerations provide some capabilities for matching workloads to hardware, they do not ensure that workloads land on specific nodes, especially if some nodes lack the appropriate taints.

The hardware profiles feature supports both accelerator and CPU-only configurations, along with node selectors, to enhance targeting capabilities for specific worker nodes. Administrators can configure hardware profiles in the settings menu. Users can select the enabled profiles using the UI for workbenches, model serving, and AI pipelines where applicable.
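
For illustration only, a hardware profile that targets GPU nodes through a node selector might look like the following sketch; the API group and schema here are assumptions, so consult the product documentation for the actual resource definition:

apiVersion: infrastructure.opendatahub.io/v1alpha1   # assumed API group
kind: HardwareProfile
metadata:
  name: nvidia-gpu-nodes
spec:
  nodeSelector:
    nvidia.com/gpu.present: "true"   # node label applied by NVIDIA GPU node feature discovery
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule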

RStudio Server workbench image

With the RStudio Server workbench image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions.

To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.

Important

Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.

CUDA - RStudio Server workbench image

With the CUDA - RStudio Server workbench image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools.

To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.

Important

Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.

The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench.

Support for multinode deployment of very large models
Serving models over multiple graphical processing unit (GPU) nodes when using a single-model serving runtime is now available as a Technology Preview feature. Deploy your models across multiple GPU nodes to improve efficiency when deploying large models such as large language models (LLMs). For more information, see Deploying models by using multiple GPU nodes.