
Chapter 6. Resolved issues


The following notable issues are resolved in Red Hat OpenShift AI 2.25. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.25 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

6.1. Issues resolved in Red Hat OpenShift AI 2.25

RHOAIENG-9418 - Elyra raises error when you use parameters in uppercase

Previously, Elyra raised an error when you tried to run a pipeline that used parameters in uppercase. This issue is now resolved.

RHOAIENG-30493 - Error creating a workbench in a Kueue-enabled project

Previously, when using the dashboard to create a workbench in a Kueue-enabled project, the creation failed if Kueue was disabled on the cluster or if the selected hardware profile was not associated with a LocalQueue. In this case, the required LocalQueue could not be referenced, the admission webhook validation failed, and an error message was shown. This issue has been resolved.
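For context, a Kueue-enabled project needs a LocalQueue in the project namespace that the hardware profile can reference. A minimal sketch of such a LocalQueue, where the queue and ClusterQueue names are placeholders:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: default-local-queue            # placeholder name
  namespace: my-data-science-project   # the Kueue-enabled project namespace
spec:
  clusterQueue: default-cluster-queue  # must match an existing ClusterQueue
```

With the fix, workbench creation in such a project succeeds when the selected hardware profile resolves to a LocalQueue like the one above.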

RHOAIENG-32942 - Elyra requires unsupported filters on the REST API when pipeline store is Kubernetes

Before this update, when the pipeline store was configured to use Kubernetes, Elyra sent equality (eq) filters that the REST API did not support; only substring filters were supported in this mode. As a result, pipelines created and submitted through Elyra from a workbench could not run successfully. This issue has been resolved.

RHOAIENG-32897 - Pipelines defined with the Kubernetes API and invalid platformSpec do not appear in the UI or run

Before this update, when a pipeline version defined with the Kubernetes API included an empty or invalid spec.platformSpec field (for example, {} or missing the kubernetes key), the system misidentified the field as the pipeline specification. As a result, the REST API omitted the pipelineSpec, which prevented the pipeline version from being displayed in the UI and from running. This issue is now resolved.
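To illustrate, a pipeline version stored through the Kubernetes API carries platform configuration separately from the pipeline specification, and the platformSpec field is only meaningful when it contains the kubernetes key. A hedged sketch (the API group and version follow upstream Kubeflow Pipelines; verify against the CRDs installed on your cluster):

```yaml
apiVersion: pipelines.kubeflow.org/v2beta1  # verify against your installed CRDs
kind: PipelineVersion
metadata:
  name: my-pipeline-v1       # placeholder
spec:
  pipelineSpec:
    # ...compiled pipeline IR goes here...
  platformSpec:
    kubernetes: {}           # valid: contains the kubernetes key
  # platformSpec: {}         # invalid: an empty object was misread as the pipeline spec
```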

RHOAIENG-31386 - Error deploying an Inference Service with authenticationRef

Before this update, when deploying an InferenceService with authenticationRef under external metrics, the authenticationRef field was removed. This issue is now resolved.

RHOAIENG-33914 - LM-Eval Tier2 task test failures

Previously, LM-Eval Tier2 task tests could fail because the Massive Multitask Language Understanding Symbol Replacement (MMLUSR) tasks were broken. This issue is resolved with the latest version of the trustyai-service-operator.

RHOAIENG-35532 - Unable to deploy models with HardwareProfiles and GPU

Before this update, model deployments that used a HardwareProfile to request GPUs had stopped working. The issue is now resolved.

RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade

Previously, installing or upgrading OpenShift AI on a cluster that already included an existing Argo Workflows instance could cause conflicts with the embedded Argo components deployed by Data Science Pipelines. This issue has been resolved. You can now configure OpenShift AI to use an existing Argo Workflows instance, enabling clusters that already run Argo Workflows to integrate with Data Science Pipelines without conflicts.
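With this change, a cluster that already runs Argo Workflows can tell the Operator not to deploy the embedded Argo controllers. A hedged sketch of the relevant DataScienceCluster fragment (the argoWorkflowsControllers field name is taken from the upstream operator; confirm it against the CRD installed on your cluster):

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      managementState: Managed
      argoWorkflowsControllers:
        managementState: Removed  # keep using the cluster's existing Argo Workflows instance
```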

RHOAIENG-35623 - Model deployment fails when using hardware profiles

Previously, model deployments that used hardware profiles failed because the Red Hat OpenShift AI Operator did not inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService when InferenceService resources were created manually. As a result, the model deployment pods could not be scheduled to suitable nodes and the deployment failed to enter a ready state. This issue is now resolved.
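The fix means that scheduling hints defined on a hardware profile now reach the generated InferenceService. A hedged sketch of a HardwareProfile carrying such hints (the API group and version may vary between releases, and all names and selectors here are placeholders):

```yaml
apiVersion: dashboard.opendatahub.io/v1alpha1  # may vary between releases
kind: HardwareProfile
metadata:
  name: gpu-profile                 # placeholder
  namespace: redhat-ods-applications
spec:
  identifiers:
    - identifier: nvidia.com/gpu
      displayName: NVIDIA GPU
      minCount: 1
      defaultCount: 1
  nodeSelector:
    nvidia.com/gpu.present: "true"  # placeholder label
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
```

After the fix, the Operator propagates the tolerations, nodeSelector, and identifiers above into the InferenceService pod spec so the pods can schedule onto matching nodes.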
