Chapter 7. Resolved issues


The following notable issues are resolved in Red Hat OpenShift AI 2.19.2. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.19 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

7.1. Security updates in Red Hat OpenShift AI 2.19.2 (July 2025)

This release provides security updates. For a complete list of updates, see the associated errata advisory on the Red Hat Customer Portal.

7.2. Issues resolved in Red Hat OpenShift AI 2.19.1 (July 2025)

RHOAIENG-27374 (previously documented as RHOAIENG-26263) - Node selector not cleared when changing the hardware profile for a workbench or model deployment

Previously, if you edited an existing workbench or model deployment to change the hardware profile from one that included a node selector to one that did not, the old node placement settings might not have been removed. As a result, your workload could still be scheduled based on the old node selector, leading to an inefficient use of resources. This issue is now resolved.

RHOAIENG-24967 (previously documented as RHOAIENG-24886) - Cannot deploy OCI model when Model URI field includes prefix

Previously, when deploying an OCI model, if you pasted the complete URI in the Model URI field and then moved the cursor to another field, the URL prefix (for example, http://) was removed from the Model URI field, but it was included in the storageUri value in the InferenceService resource. As a result, you could not deploy the OCI model. This issue is now resolved.

7.3. Issues resolved in Red Hat OpenShift AI 2.19.0 (April 2025)

RHOAIENG-6486 - Pod labels, annotations, and tolerations cannot be configured when using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image

Previously, the TensorFlow-based workbench images did not allow you to set pod labels, annotations, or tolerations when using the Elyra JupyterLab extension. With the 2025.1 images, the TensorFlow-based workbench is upgraded with a newer Kubeflow Pipelines SDK (kfp). With the upgraded SDK, you can set pod labels, annotations, and tolerations when using the Elyra extension to schedule data science pipelines.
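For illustration only, the following minimal sketch (not taken from the product documentation) shows how pod labels, annotations, and tolerations can be attached to pipeline tasks with the kfp SDK and its kfp-kubernetes extension; the Elyra pipeline editor exposes equivalent node properties. The component, label, annotation, and toleration values are hypothetical, and the kfp-kubernetes package is assumed to be installed in the workbench image.

```python
# Hypothetical sketch: attach pod metadata and scheduling hints to a pipeline task.
# Assumes the kfp and kfp-kubernetes packages are available in the workbench image.
from kfp import dsl, kubernetes


@dsl.component(base_image="python:3.11")
def train() -> str:
    return "done"


@dsl.pipeline(name="labels-annotations-tolerations-example")
def pipeline():
    task = train()
    # Pod label and annotation on the task's pod (keys and values are illustrative).
    kubernetes.add_pod_label(task, label_key="team", label_value="data-science")
    kubernetes.add_pod_annotation(
        task,
        annotation_key="example.com/owner",
        annotation_value="ml-platform",
    )
    # Toleration so the pod can be scheduled on tainted nodes, for example GPU nodes.
    kubernetes.add_toleration(
        task, key="nvidia.com/gpu", operator="Exists", effect="NoSchedule"
    )
```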

RHOAIENG-21197 - Deployment failure when using vLLM runtime on AMD GPU accelerators in a FIPS-enabled cluster

Previously, when deploying a model by using the vLLM runtime on AMD GPU accelerators in a FIPS-enabled cluster, the deployment could fail. This issue is now resolved.

RHOAIENG-20245 - Certain model registry operations remove custom properties from the registered model and version

Previously, editing the description, labels, or properties of a model version removed labels and custom properties from the associated model. Deploying a model version, or editing its model source format, removed labels and custom properties from the version and from the associated model. This issue is now resolved.

RHOAIENG-19954 - Kueue alerts not monitored in OpenShift

Previously, in the OpenShift console, Kueue alerts were not monitored. The new ServiceMonitor resource rejected the usage of the BearerTokenFile field, which meant that Prometheus did not have the required permissions to scrape the target. As a result, the Kueue alerts were not shown on the Observe > Alerting page, and the Kueue targets were not shown on the Observe > Targets page. This issue is now resolved.

RHOAIENG-19716 - The system-authenticated user group cannot be removed by using the dashboard

Previously, after installing or upgrading Red Hat OpenShift AI, the system-authenticated user group was displayed in Settings > User management under Data science user groups. If you removed this user group from Data science user groups and saved the changes, the group was erroneously added again. This issue is now resolved.

RHOAIENG-18238 - Inference endpoints for deployed models return 403 error after upgrading the Authorino Operator

Previously, after upgrading the Authorino Operator, the automatic Istio sidecar injection might not have been reapplied. Without the sidecar, Authorino was not correctly integrated into the service mesh, causing inference endpoint requests to fail with an HTTP 403 error. This issue is now resolved.

RHOAIENG-11371 - Incorrect run status reported for runs using ExitHandler

Previously, when using pipeline exit handlers (dsl.ExitHandler), if a task inside the handler failed but the exit task succeeded, the overall pipeline run status was inaccurately reported as Succeeded instead of Failed. This issue is now resolved.
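For context, the following minimal kfp v2 sketch (hypothetical, not taken from the product documentation) shows the dsl.ExitHandler pattern this fix applies to: the exit task runs regardless of the outcome of the tasks inside the handler, and with the fix a failure inside the handler is now reflected in the overall run status.

```python
# Hypothetical sketch of the dsl.ExitHandler pattern in kfp v2.
from kfp import dsl


@dsl.component(base_image="python:3.11")
def cleanup():
    # The exit task runs even if tasks inside the handler fail.
    print("cleaning up")


@dsl.component(base_image="python:3.11")
def work():
    # Simulated failure inside the exit handler.
    raise RuntimeError("simulated failure")


@dsl.pipeline(name="exit-handler-example")
def pipeline():
    exit_task = cleanup()
    # With the fix, a failure of work() marks the run as Failed
    # even when cleanup() succeeds.
    with dsl.ExitHandler(exit_task):
        work()
```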

RHOAIENG-16146 - Connection sometimes not preselected when deploying a model from model registry

Previously, when deploying a model from a model registry, the object storage connection (previously called data connection) might not have been preselected. This issue is now resolved.

RHOAIENG-21068 - InstructLab pipeline run cannot be created when the parameter sdg_repo_pr is left empty

Previously, when creating a pipeline run of the InstructLab pipeline, if the parameter sdg_repo_pr was left empty, the pipeline run could not be created and an error message appeared. This issue is now resolved.
