Chapter 2. New features and enhancements


This section describes new features and enhancements in Red Hat OpenShift AI 2.25.

2.1. New features

Model registry and model catalog general availability

OpenShift AI model registry and model catalog are now available as general availability (GA) features.

A model registry acts as a central repository for administrators and data scientists to register, version, and manage the lifecycle of AI models before configuring them for deployment. A model registry is a key component for AI model governance.

The model catalog provides a curated library where data scientists and AI engineers can discover and evaluate the available generative AI models to find the best fit for their use cases.

LLM Compressor library added to OpenShift AI workbench images and pipelines

The LLM Compressor library is now generally available and fully integrated into standard OpenShift AI workbench images and pipelines.

This library provides a supported, integrated method to optimize large language models for improved inference, particularly for deployment on vLLM, without leaving your OpenShift AI environment. You can run model compression as an interactive notebook task or as a batch job in a pipeline, which significantly reduces the hardware costs and improves the inference speeds of your generative AI workloads.

Use an existing Argo Workflows instance with pipelines

You can now configure OpenShift AI to use an existing Argo Workflows instance instead of the one included with Data Science Pipelines. This feature supports users who maintain their own Argo Workflows environments and simplifies adoption of pipelines on clusters where Argo Workflows is already deployed.

A new global configuration option disables deployment of the embedded Argo WorkflowControllers, allowing clusters that already use Argo Workflows to integrate with pipelines without conflicts. Cluster administrators can choose whether to deploy the embedded controllers or use their own Argo instance and manage both lifecycles independently. For more information, see Configuring pipelines with your own Argo Workflows instance.
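As an illustration, a cluster administrator might disable the embedded controllers in the DataScienceCluster (DSC) resource. The field path shown here (spec.components.datasciencepipelines.argoWorkflowsControllers) is a sketch of how this option is typically surfaced; check the linked documentation for the authoritative field name:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      managementState: Managed
      # Do not deploy the embedded Argo workflow controllers;
      # use the Argo Workflows instance already on the cluster.
      argoWorkflowsControllers:
        managementState: Removed
```

With the embedded controllers removed, you remain responsible for managing the lifecycle and version compatibility of your own Argo Workflows installation.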

Support added for Python 3.12 workbench images
You can now install and upgrade Python 3.12 workbench images in OpenShift AI for your JupyterLab and code-server IDEs.

2.2. Enhancements

Support for customizing OAuth proxy sidecar resource allocation

You can now customize the CPU and memory requests and limits for the OAuth proxy sidecar in workbench pods. To do this, add one or more of the following annotations to the Notebook custom resource (CR):

  • notebooks.opendatahub.io/auth-sidecar-cpu-request
  • notebooks.opendatahub.io/auth-sidecar-memory-request
  • notebooks.opendatahub.io/auth-sidecar-cpu-limit
  • notebooks.opendatahub.io/auth-sidecar-memory-limit

    If you do not specify these annotations, the sidecar uses the default values of 100m CPU and 64Mi memory to maintain backward compatibility. After you add or modify the annotations, you must restart the workbench for the new resource allocations to take effect.

    The annotation values must follow the Kubernetes resource unit convention. For more information, see Resource units in Kubernetes.
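For example, a Notebook CR with custom sidecar allocations might look like the following; the workbench name and the resource values shown are illustrative:

```yaml
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: my-workbench  # example workbench name
  annotations:
    # Override the OAuth proxy sidecar defaults (100m CPU, 64Mi memory)
    notebooks.opendatahub.io/auth-sidecar-cpu-request: "200m"
    notebooks.opendatahub.io/auth-sidecar-memory-request: "128Mi"
    notebooks.opendatahub.io/auth-sidecar-cpu-limit: "500m"
    notebooks.opendatahub.io/auth-sidecar-memory-limit: "256Mi"
```

Remember that the workbench must be restarted before the new allocations take effect.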

Enhanced workbench authentication
Workbench authentication is now streamlined in OpenShift AI. When you create a new workbench, a reconciler automatically generates the required OAuthClient, removing the need to manually grant permissions to the oauth-proxy container.
Support for flexible storage class management
With this release, administrators can now choose any supported access mode for a storage class when adding cluster storage to a project or workbench in OpenShift AI. This enhancement removes deployment issues caused by unsupported storage classes or incorrect access mode assumptions.
Support for deployment on the Grace Hopper Arm platform
OpenShift AI can now be deployed on the Grace Hopper Arm platform. This enhancement expands hardware compatibility beyond x86 architectures, enabling you to deploy and run workloads on Arm-based NVIDIA Grace Hopper systems. These systems provide a scalable, power-efficient, and high-performance environment for AI and machine-learning workloads.
Note

The following components and image variants are currently unavailable:

  • The pytorch and pytorch+llmcompressor workbench and pipeline runtime images
  • CUDA-accelerated Kubeflow training images
  • The fms-hf-tuning image
Define and manage pipelines with Kubernetes API

You can now define and manage data science pipelines and pipeline versions by using the Kubernetes API, which stores them as custom resources in the cluster instead of the internal database. This enhancement makes it easier to use OpenShift GitOps (Argo CD) or similar tools to manage pipelines, while still allowing you to manage them through the OpenShift AI user interface, API, and kfp SDK.

This option, enabled by default, is configurable with the Store pipeline definitions in Kubernetes checkbox when you create or edit a pipeline server. OpenShift AI administrators and project owners can also configure this option by setting the spec.apiServer.pipelineStore field to kubernetes or database in the DataSciencePipelinesApplication (DSPA) custom resource. For more information, see Defining a pipeline by using the Kubernetes API.
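As a sketch, the relevant DSPA field might be set as follows; the API version shown here may differ on your cluster, so verify it against the installed CRD:

```yaml
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa  # example DSPA name
spec:
  apiServer:
    # Store pipeline definitions as custom resources in the cluster
    # instead of the internal database ("database" reverts to the old behavior).
    pipelineStore: kubernetes
```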

Support added for configuring TrustyAI global settings with the DataScienceCluster (DSC) resource
Administrators can now declaratively manage settings such as LMEval’s allowOnline and allowCodeExecution through the DSC interface, with changes automatically propagated to the TrustyAI operator. This unifies TrustyAI configuration with other OpenShift AI components and removes the need for manual ConfigMap edits or Operator restarts.
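A minimal sketch of the corresponding DSC configuration follows. The exact field path under the trustyai component is an assumption here; consult the product documentation for the authoritative schema:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    trustyai:
      managementState: Managed
      # Hypothetical field names for the LMEval settings described above:
      eval:
        allowOnline: true
        allowCodeExecution: false
```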
Support added to move unwanted files to trash directory
You can now free up container storage by moving unwanted files to a trash directory in JupyterLab and then permanently deleting them. To delete these files, click the Move to Trash icon on your Jupyter notebook toolbar, browse your trash directory, select the files that you want to permanently delete, and delete them to prevent your notebook storage from filling up.
Updated workbench images
A new set of workbench images is now available. These pre-built workbench images and upgraded packages include Python libraries and frameworks for data analysis and exploration, as well as CUDA and ROCm packages for accelerating compute-intensive tasks. Additionally, they feature runtimes and updated IDEs for RStudio and code-server.