
Chapter 3. New features and enhancements


This section describes new features and enhancements in Red Hat OpenShift AI 3.0.

3.1. New features

PyTorch v2.8.0 KFTO training images now generally available

You can now use PyTorch v2.8.0 training images for distributed workloads in OpenShift AI.

The following new images are available:

  • ROCm-compatible KFTO training image: quay.io/modh/training:py312-rocm63-torch280, compatible with AMD accelerators supported by ROCm 6.3.
  • CUDA-compatible KFTO training image: quay.io/modh/training:py312-cuda128-torch280, compatible with NVIDIA GPUs supported by CUDA 12.8. A usage sketch follows this list.
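For example, a training job can reference one of these images in a Kubeflow PyTorchJob. The following is a minimal sketch that assumes the Kubeflow Training Operator is enabled on the cluster; the namespace, job name, script path, and GPU count are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# Minimal PyTorchJob that runs a training script on the CUDA-enabled image.
pytorchjob = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "torch280-demo", "namespace": "my-data-science-project"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {
                    "spec": {
                        "containers": [
                            {
                                "name": "pytorch",
                                "image": "quay.io/modh/training:py312-cuda128-torch280",
                                "command": ["python", "/workspace/train.py"],
                                "resources": {"limits": {"nvidia.com/gpu": "1"}},
                            }
                        ]
                    }
                },
            }
        }
    },
}

api.create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="my-data-science-project",
    plural="pytorchjobs",
    body=pytorchjob,
)
```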
Hardware profiles

A new Hardware Profiles feature replaces the previous Accelerator Profiles and legacy Container Size selector for workbenches.

Hardware profiles provide a more flexible and consistent way to define compute configurations for AI workloads, simplifying resource management across different hardware types.

Important

The Accelerator Profiles and legacy Container Size selector are now deprecated and will be removed in a future release.

Connections API now generally available

The Connections API is now available as a general availability (GA) feature in OpenShift AI.

This API enables you to create and manage connections to external data sources and services directly within OpenShift AI. Connections are stored as Kubernetes Secrets with standardized annotations, allowing protocol-based validation and routing across integrated components.
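Because connections are stored as annotated Secrets, they can also be created programmatically. The following is a minimal sketch of an S3-style connection; the annotation and label keys follow the connection-type convention but are assumptions to verify against the Connections API documentation, and all values are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(
        name="my-s3-connection",
        namespace="my-data-science-project",
        # Assumed keys: they mark the Secret as a dashboard-visible,
        # S3-type connection so that protocol-based validation can apply.
        labels={"opendatahub.io/dashboard": "true"},
        annotations={
            "opendatahub.io/connection-type": "s3",
            "openshift.io/display-name": "My S3 connection",
        },
    ),
    type="Opaque",
    string_data={
        "AWS_ACCESS_KEY_ID": "example-key-id",
        "AWS_SECRET_ACCESS_KEY": "example-secret",
        "AWS_S3_ENDPOINT": "https://s3.example.com",
        "AWS_S3_BUCKET": "models",
    },
)

core.create_namespaced_secret(namespace="my-data-science-project", body=secret)
```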

IBM Power and IBM Z architecture support

OpenShift AI now supports the IBM Power (ppc64le) and IBM Z (s390x) architectures.

This expanded platform support enables deployment of AI and machine learning workloads on IBM enterprise hardware, providing greater flexibility and scalability across heterogeneous environments.

For more information about supported software platforms, components, and dependencies, see the Knowledgebase article: Red Hat OpenShift AI: Supported Configurations.

IBM Power and IBM Z architecture support for TrustyAI

TrustyAI is now available as a general availability (GA) feature for the IBM Power (ppc64le) and IBM Z (s390x) architectures.

TrustyAI is an open-source Responsible AI toolkit that provides a suite of tools to support responsible and transparent AI workflows. It offers capabilities such as fairness and data drift metrics, local and global model explanations, text detoxification, language model benchmarking, and language model guardrails.

These capabilities help ensure transparency, accountability, and the ethical use of AI systems within OpenShift AI environments on IBM Power and IBM Z systems.

IBM Power and IBM Z architecture support for Model Registry

Model Registry is now available for the IBM Power (ppc64le) and IBM Z (s390x) architectures.

Model Registry is an open-source component that simplifies and standardizes the management of AI and machine learning (AI/ML) model lifecycles. It provides a centralized platform for storing, versioning, and governing models, enabling seamless collaboration across data science and MLOps teams.

Model Registry supports capabilities such as model versioning and lineage tracking, metadata management and model discovery, model approval and promotion workflows, integration with CI/CD and deployment pipelines, and governance, auditability, and compliance features on IBM Power and IBM Z systems.
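As an illustration, the upstream model-registry Python client can register a model version against a registry endpoint. This is a minimal sketch; the server address, author, and exact client arguments are assumptions to verify against the Model Registry documentation:

```python
from model_registry import ModelRegistry

# Placeholder endpoint and author; authentication options depend on your setup.
registry = ModelRegistry(
    server_address="https://model-registry.example.com",
    port=443,
    author="data-scientist@example.com",
)

registered = registry.register_model(
    name="fraud-detector",
    uri="s3://models/fraud-detector/1.0.0",
    version="1.0.0",
    model_format_name="onnx",
    model_format_version="1",
    description="Example model registered for lineage and governance tracking",
)
print(registered.id)
```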

IBM Power and IBM Z architecture support for Notebooks

Notebooks are now available for the IBM Power (ppc64le) and IBM Z (s390x) architectures.

Notebooks provide containerized, browser-based development environments for data science, machine learning, research, and coding within the OpenShift AI ecosystem. These environments can be launched through Workbenches and include the following options:

  • Jupyter Minimal notebook: A lightweight JupyterLab IDE for basic Python development and model prototyping.
  • Jupyter Data Science notebook: Preconfigured with popular data science libraries and tools for end-to-end workflows.
  • Jupyter TrustyAI notebook: An environment for Responsible AI tasks, including model explainability, fairness, data drift detection, and text detoxification.
  • Code Server: A browser-based VS Code environment for collaborative development with familiar IDE features.
  • Runtime Minimal and Runtime Data Science: Headless environments for automated workflows and consistent pipeline execution.
IBM Power architecture support for Feature Store

Feature Store is now supported on the IBM Power (ppc64le) architecture.

This support enables users to build, register, and manage features for machine learning models directly on IBM Power-based environments, with full integration with OpenShift AI.
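For example, Feature Store (based on the upstream Feast project) exposes a Python SDK for retrieving features at inference time. The following is a minimal sketch that assumes an existing feature repository with a driver_hourly_stats feature view; all names and the repository path are placeholders:

```python
from feast import FeatureStore

# Point at an existing feature repository (path is a placeholder).
store = FeatureStore(repo_path=".")

# Fetch online features for a single entity for low-latency inference.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(features)
```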

IBM Power architecture support for AI Pipelines

AI Pipelines are now supported on the IBM Power (ppc64le) architecture.

This capability enables users to define, run, and monitor AI pipelines natively within OpenShift AI, leveraging the performance and scalability of IBM Power systems for AI workloads.

AI Pipelines executed on IBM Power systems maintain functional parity with x86 deployments.
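For example, pipelines are defined with the Kubeflow Pipelines (KFP) SDK and compiled to a YAML package that can be imported into AI Pipelines on any supported architecture, including Power. This is a minimal sketch with illustrative component and pipeline names:

```python
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def add(a: float, b: float) -> float:
    return a + b


@dsl.pipeline(name="hello-ai-pipeline")
def hello_pipeline(x: float = 1.0, y: float = 2.0):
    add(a=x, b=y)


# Compile to an intermediate-representation YAML that AI Pipelines can import.
compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")
```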

Support for IBM Power accelerated Triton Inference Server

You can now enable IBM Power architecture support for the Triton Inference Server (CPU only) with the FIL, PyTorch, Python, and ONNX backends. You can deploy the Triton Inference Server as a custom model-serving runtime on IBM Power architecture in Red Hat OpenShift AI.
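After the runtime is deployed and a model is served, the endpoint can be queried with the open inference (v2) REST protocol that Triton implements. The following is a minimal sketch; the endpoint URL, model name, input tensor name, and shape are placeholders that depend on your deployment:

```python
import requests

# Placeholder endpoint exposed by the deployed Triton serving runtime.
url = "https://my-triton-route.example.com/v2/models/my-onnx-model/infer"

payload = {
    "inputs": [
        {
            "name": "input__0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["outputs"])
```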

For details, see Triton Inference Server image.

Support for IBM Z accelerated Triton Inference Server

You can now enable Z architecture support for the Triton Inference Server (Telum I/Telum II) with multiple backend options, including ONNX-MLIR, Snap ML (C++), and PyTorch. The Triton Inference Server can be deployed as a custom model serving runtime on IBM Z architecture as a Technology Preview feature in Red Hat OpenShift AI.

For details, see IBM Z accelerated Triton Inference Server.

IBM Spyre AI Accelerator model serving support on IBM Z platforms

Model serving with the IBM Spyre AI Accelerator is now available as a general availability (GA) feature for IBM Z platforms.

The IBM Spyre Operator automates installation and integrates key components such as the device plugin, secondary scheduler, and monitoring.

For more information, see the IBM Spyre Operator catalog entry: IBM Spyre Operator — Red Hat Ecosystem Catalog.

Note

On IBM Z and IBM LinuxONE, Red Hat OpenShift AI supports deploying large language models (LLMs) with vLLM on IBM Spyre. The Triton Inference Server is supported on Telum (CPU) only.


Model customization components

OpenShift AI 3.0 introduces a suite of model customization components that streamline and enhance the process of preparing, fine-tuning, and deploying AI models.

The following components are now available:

  • Red Hat AI Python Index: A Red Hat–maintained Python package index that hosts supported builds of packages useful for AI and machine learning notebooks. Using the Red Hat AI Python Index ensures reliable and secure access to these packages in both connected and disconnected environments.
  • docling: A powerful Python library for advanced data processing that converts unstructured documents, such as PDFs or images, into clean, machine-readable formats for AI and ML workloads. A conversion sketch follows this list.
  • Synthetic Data Generation Hub (sdg-hub): A toolkit for generating high-quality synthetic data to augment datasets, improve model robustness, and address edge cases.
  • Training Hub: A framework that simplifies and accelerates the fine-tuning and customization of foundation models by using your own data.
  • Kubeflow Trainer: A Kubernetes-native capability that enables distributed training and fine-tuning of models while abstracting the underlying infrastructure complexity.
  • AI Pipelines: A Kubeflow-native capability for building configurable workflows across AI components, including all other model customization modules in this suite.
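As a brief illustration of the data-preparation step, docling can convert an unstructured PDF into Markdown that downstream components such as sdg-hub or Training Hub can consume. This sketch uses a placeholder file path:

```python
from docling.document_converter import DocumentConverter

converter = DocumentConverter()

# Convert an unstructured PDF (path is a placeholder) into a structured document.
result = converter.convert("reports/quarterly-report.pdf")

# Export the parsed content as Markdown for downstream AI/ML workloads.
print(result.document.export_to_markdown())
```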

3.2. Enhancements

Hybrid search support for remote vector databases in Llama Stack

You can now enable hybrid search on remote vector databases in Llama Stack in OpenShift AI.

This enhancement allows enterprises to use their existing managed vector database infrastructure while maintaining high retrieval performance and flexibility across different database types.
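A minimal sketch of issuing a hybrid query with the llama-stack-client Python SDK follows; the base URL and vector database ID are placeholders, and the params keys (in particular "mode": "hybrid") are assumptions to confirm against the Llama Stack documentation for your vector database provider:

```python
from llama_stack_client import LlamaStackClient

# Placeholder Llama Stack endpoint and vector database identifier.
client = LlamaStackClient(base_url="http://llamastack.example.com:8321")

# Request hybrid (keyword + vector) retrieval; the "mode" key is an assumption.
response = client.vector_io.query(
    vector_db_id="my-remote-vector-db",
    query="How do I configure model serving?",
    params={"mode": "hybrid", "max_chunks": 5},
)

for chunk in response.chunks:
    print(chunk.content)
```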

IBM Spyre support for IBM Z with Caikit-TGIS adapter

You can now serve models with IBM Spyre AI accelerators on IBM Z (s390x architecture) by using the vLLM Spyre s390x ServingRuntime for KServe with the Caikit-TGIS gRPC adapter.

This integration enables high-performance model serving and inference for generative AI workloads on IBM Z systems within OpenShift AI.

Data Science Pipelines renamed to AI Pipelines

OpenShift AI now uses the term "AI Pipelines" instead of "Data Science Pipelines" to better reflect the broader range of AI and generative AI use cases supported by the platform.

In the default DataScienceCluster (default-dsc), the datasciencepipelines component has been renamed to aipipelines to align with this terminology update.

This is a naming change only. The AI pipelines functionality remains the same.
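For illustration, the renamed component key appears under spec.components in the DataScienceCluster resource. The following sketch patches the default-dsc object; the API group, version, and plural shown are assumptions to verify on your cluster (for example, with oc api-resources):

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Enable the renamed component; the group/version/plural values are assumptions.
patch = {"spec": {"components": {"aipipelines": {"managementState": "Managed"}}}}

api.patch_cluster_custom_object(
    group="datasciencecluster.opendatahub.io",
    version="v1",
    plural="datascienceclusters",
    name="default-dsc",
    body=patch,
)
```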

Model Catalog enhancements with model validation data

The Model Details page in the OpenShift AI Model Catalog now includes comprehensive model validation data, such as performance benchmarks, hardware compatibility, and other key metrics.

This enhancement provides a unified and detailed view consistent with the Jounce UI model details layout, enabling users to evaluate models more effectively from a single interface.

Model Catalog performance data with search and filtering

The Model Catalog now includes detailed performance and validation data for Red Hat-validated third-party models, such as benchmarks and hardware compatibility metrics.

Enhanced search and filtering capabilities, such as filtering by latency or hardware profile, help users quickly identify models optimized for their specific use cases and available resources, providing a unified discovery experience within the Red Hat AI Hub.

Distributed Inference with llm-d is now generally available (GA)

Distributed Inference with llm-d supports multi-model serving, intelligent inference scheduling, and disaggregated serving for improved GPU utilization on generative AI models.

Note

The following capabilities are not fully supported:

  • Wide Expert-Parallelism multi-node: Developer Preview.
  • Wide Expert-Parallelism on Blackwell B200: Not generally available, but can be provided as a Technology Preview feature.
  • Multi-node on GB200: Not supported.
  • Gateway discovery and association are not supported in the UI during model deployment in this release. Users must associate models with Gateways by applying the resource manifests directly through the API or CLI.
User interface for Distributed Inference with llm-d deployment configuration

OpenShift AI now includes a user interface (UI) for configuring and deploying large language models (LLMs) on the llm-d Serving Runtime.

This streamlined interface simplifies common deployment scenarios by providing essential configuration options with sensible defaults while still allowing explicit selection of the llm-d runtime.

The new UI reduces setup complexity and helps users deploy distributed inference workloads more efficiently.

New navigation system

OpenShift AI 3.0 introduces a redesigned, streamlined navigation system that improves usability and workflow efficiency.

The new layout enables users to move seamlessly between features, simplifying access to key capabilities and supporting a smoother end-to-end experience.

Enhanced authentication for AI Pipelines

OpenShift AI 3.0 replaces oauth-proxy with kube-rbac-proxy for AI Pipelines as part of the platform-wide authentication transition.

This update improves security and compatibility, particularly for environments without an internal OAuth server, such as Red Hat OpenShift Service on AWS.

When migrating to kube-rbac-proxy, SubjectAccessReview (SAR) requirements and RBAC permissions change accordingly. Users who rely on the built-in ds-pipeline-user-access-<dspa-name> role are updated automatically, while others must ensure their roles include access to the datasciencepipelinesapplications/api subresource with the following verbs: create, update, patch, delete, get, list, and watch.
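For users not covered by the built-in role, a namespace-scoped Role similar to the following sketch grants the required access. The apiGroups value is an assumption; confirm the group that serves the DataSciencePipelinesApplication resource on your cluster:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="dspa-api-access", namespace="my-project"),
    rules=[
        client.V1PolicyRule(
            # The API group is an assumption; verify it on your cluster.
            api_groups=["datasciencepipelinesapplications.opendatahub.io"],
            resources=["datasciencepipelinesapplications/api"],
            verbs=["create", "update", "patch", "delete", "get", "list", "watch"],
        )
    ],
)

rbac.create_namespaced_role(namespace="my-project", body=role)
```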

Observability and Grafana integration for Distributed Inference with llm-d

In OpenShift AI 3.0, platform administrators can connect observability components to Distributed Inference with llm-d deployments and integrate with self-hosted Grafana instances to monitor inference workloads.

This capability allows teams to collect and visualize Prometheus metrics from Distributed Inference with llm-d for performance analysis and custom dashboard creation.
