
Chapter 2. New features and enhancements


This section describes new features and enhancements in Red Hat OpenShift AI 2.24.

2.1. New features

Support added to TrustyAI for KServe InferenceLogger integration

TrustyAI now provides support for KServe inference deployments through automatic InferenceLogger configuration.

Both KServe Raw and Serverless deployment modes are supported, and the deployment mode is detected automatically from the InferenceService annotations.
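As a sketch of the detection described above, the deployment mode is carried as an annotation on the InferenceService resource. The service name, model format, and storage location below are placeholders:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model                 # placeholder name
  annotations:
    # "RawDeployment" for KServe Raw; "Serverless" for the Serverless mode
    serving.kserve.io/deploymentMode: RawDeployment
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                 # example model format
      storageUri: s3://example-bucket/model   # placeholder storage location
```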

Enhanced workload scheduling with Kueue

OpenShift AI now provides enhanced workload scheduling with the Red Hat build of Kueue. Kueue is a job-queuing system that provides resource-aware scheduling for workloads, improving GPU utilization and ensuring fair resource sharing with intelligent, quota-based scheduling across AI workloads.

This feature expands Kueue’s workload support in OpenShift AI to include workbenches (Notebook) and model serving (InferenceService), in addition to the previously supported distributed training jobs (RayJob, RayCluster, PyTorchJob).

A validating webhook now handles queue enforcement. This webhook ensures that in any project enabled for Kueue management (with the kueue.openshift.io/managed=true label), all supported workloads must specify a target LocalQueue (with the kueue.x-k8s.io/queue-name label). This replaces the Validating Admission Policy used in previous versions.

For more information, see Managing workloads with Kueue.
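The two labels described above can be sketched as follows. This is a minimal example with placeholder project, workbench, and queue names: the namespace opts in to Kueue management, and each supported workload in it must then name a target LocalQueue or the validating webhook rejects it.

```yaml
# Project enabled for Kueue management
apiVersion: v1
kind: Namespace
metadata:
  name: my-data-science-project            # placeholder project name
  labels:
    kueue.openshift.io/managed: "true"
---
# A workbench (Notebook) in that project must specify a target LocalQueue
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: my-workbench                       # placeholder workbench name
  namespace: my-data-science-project
  labels:
    kueue.x-k8s.io/queue-name: my-local-queue   # placeholder LocalQueue name
spec: {}                                   # workbench spec omitted for brevity
```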

Support added to view git commit hash in an image
You can now view the git commit hash in an image. This feature lets you determine whether the image has changed, even if the version number stays the same, and trace the image back to its source code if needed.
Support added for data science pipelines with Elyra
When using data science pipelines with Elyra, you can now use a service-based URL instead of a route-based URL. You can access your data science pipeline directly through the service by including the port number.
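A service-based endpoint follows the standard Kubernetes service DNS form. The service name, namespace, and port in the example line are hypothetical; substitute the values for your pipeline server:

```
https://<service-name>.<namespace>.svc.cluster.local:<port>
https://ds-pipeline-example.my-project.svc.cluster.local:8443
```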
Workbench images mirrored by default

The latest version of workbench images is mirrored by default when you mirror images to a private registry for a disconnected installation. As an administrator, you can mirror older versions of workbench images through the additionalImages field in the Disconnected Helper configuration.
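A minimal sketch of the additionalImages field, assuming it accepts a list of image references; the registry path and tag are placeholders, and the exact structure may differ, so check the Disconnected Helper documentation for your version:

```yaml
# Hypothetical excerpt of a Disconnected Helper configuration
additionalImages:
  - registry.example.com/workbench-image:older-tag   # placeholder image reference
```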

Important

Only the latest version of workbench images is supported with bug fixes and security updates. Older versions of workbench images are available, but they do not receive bug fixes or security updates.

2.2. Enhancements

Support for serving models from a Persistent Volume Claim (PVC)
Red Hat OpenShift AI now supports serving models directly from existing cluster storage. With this feature, you can serve models from pre-existing persistent volume claim (PVC) locations and create new PVCs for model storage within the interface.
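In KServe, a model on existing cluster storage is referenced with a pvc:// storage URI. The following is a minimal sketch with placeholder resource, claim, and path names:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: pvc-served-model              # placeholder name
spec:
  predictor:
    model:
      modelFormat:
        name: onnx                    # example model format
      # pvc://<pvc-name>/<path-to-model> points at an existing claim
      storageUri: pvc://my-model-pvc/models/example
```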
New option to disable caching for all pipelines in a project

You can now disable caching for all data science pipelines in the pipeline server, which overrides all pipeline and task-level caching settings. This global setting is useful for scenarios such as debugging, development, or cases that require deterministic re-execution.

This option is configurable with the Allow caching to be configured per pipeline and task checkbox when you create or edit a pipeline server. Cluster administrators can also configure caching through the spec.apiServer.cacheEnabled field. By default, this field is set to true. To disable caching cluster-wide, set this field to false. For more information, see Overview of data science pipelines caching.
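To disable caching for all pipelines as described above, a cluster administrator sets the field on the pipeline server's DataSciencePipelinesApplication resource. The resource name below is a placeholder, and the API group shown is an assumption; confirm it against the CRDs on your cluster:

```yaml
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa                          # placeholder resource name
spec:
  apiServer:
    cacheEnabled: false               # overrides all pipeline- and task-level caching settings
```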

Migration of production images from Quay to Red Hat Registry
RHOAI production images that fall under the current support model have been migrated from Quay to Red Hat Registry. They will continue to receive updates as defined by their release channel. Previously released images will remain in Quay.
JupyterLab version updated
JupyterLab is updated from version 4.2 to 4.4. This update adds a Move to Trash option to the right-click menu for folders, along with other bug fixes and enhancements.
Updated vLLM component versions

OpenShift AI supports the following vLLM versions for each listed component:

  • vLLM CUDA: v0.10.0.2
  • vLLM ROCm: v0.10.0.2
  • vLLM Gaudi: v0.10.0.2
  • vLLM Power/Z: v0.10.0.2
  • OpenVINO Model Server: v2025.2.1

For more information, see vLLM on GitHub.
