Chapter 6. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI 2.24. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.24 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.
6.1. Issues resolved in Red Hat OpenShift AI 2.24
OCPBUGS-44432 - ImageStream unable to import image tags in a disconnected OpenShift environment
Before this update, if you used the ImageTagMirrorSet (ITMS) or ImageDigestMirrorSet (IDMS) in a disconnected OpenShift environment, the ImageStream resource prevented the mirror from importing the image, and a RHOAI workbench instance could not be created. This issue is now resolved in OpenShift Container Platform 4.19.13 or later. Update your OpenShift instances to 4.19.13 or later to avoid this issue.
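For orientation, a mirror configuration of the kind this issue affected looks roughly like the following sketch. The source repository and mirror registry host are placeholders, not values shipped with OpenShift AI:

  apiVersion: config.openshift.io/v1
  kind: ImageTagMirrorSet
  metadata:
    name: workbench-image-mirrors              # illustrative name
  spec:
    imageTagMirrors:
      - source: quay.io/modh                   # placeholder upstream repository
        mirrors:
          - mirror.registry.example.com/modh   # placeholder disconnected mirror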
RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade
After upgrading from OpenShift AI 2.22 or earlier with the model registry component enabled, the model registry Operator could enter a restart loop. This was due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod. This issue is now resolved.
RHOAIENG-31248 - KServe http: TLS handshake error
Previously, the OpenShift CA auto-injection in the localmodelcache validation webhook configuration was missing the necessary annotation, leading to repeated TLS handshake errors. This issue is now resolved.
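As background, on OpenShift the service CA bundle is injected into a webhook configuration through an annotation. The sketch below is illustrative only; the webhook, service, and path names are assumptions rather than the exact objects that KServe ships:

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: localmodelcache-validator                 # illustrative name
    annotations:
      # The service CA operator injects the serving CA into
      # webhooks[].clientConfig.caBundle when this annotation is present.
      service.beta.openshift.io/inject-cabundle: "true"
  webhooks:
    - name: localmodelcache.validator.example.com   # illustrative name
      admissionReviewVersions: ["v1"]
      sideEffects: None
      clientConfig:
        service:
          name: kserve-webhook-server-service       # assumed service name
          namespace: redhat-ods-applications        # assumed namespace
          path: /validate-localmodelcache           # assumed path
      rules:
        - apiGroups: ["serving.kserve.io"]
          apiVersions: ["v1alpha1"]
          operations: ["CREATE", "UPDATE"]
          resources: ["localmodelcaches"]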
RHOAIENG-31376 - Inference service creation using vLLM runtime fails on IBM Power cluster
Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: OpNamespace' '_C_utils' object has no attribute 'init_cpu_threads_env. This issue is now resolved.
RHOAIENG-31377 - Inference service creation fails on IBM Power cluster
Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name. This issue is now resolved.
RHOAIENG-31498 - Incorrect inference URL in LlamaStack LMEval provider
Before this update, when you ran evaluations on Llama Stack using the LMEval provider, the evaluation jobs incorrectly used v1/openai/v1/completions as the model server endpoint. This resulted in a job failure because the correct model server endpoint was v1/completions. This issue is now resolved.
RHOAIENG-31536 - Prometheus configuration not reconciled properly
Before this update, the Monitoring resource did not reconcile properly and showed a "Not Ready" status when upgrading to or installing OpenShift AI 2.23. This issue occurred because the resource required the OpenTelemetry and Cluster Observability Operators to be installed, even if no new monitoring or tracing configurations were added to the DSCInitialization resource. As a result, the Prometheus configuration did not reconcile, leading to empty or outdated alert configurations. This issue is now resolved.
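For context, a minimal monitoring stanza in the DSCInitialization resource looks roughly like the following sketch; fields beyond managementState and namespace vary between OpenShift AI releases:

  apiVersion: dscinitialization.opendatahub.io/v1
  kind: DSCInitialization
  metadata:
    name: default-dsci
  spec:
    monitoring:
      # Managed monitoring with no additional metrics or tracing configuration;
      # after this fix, reconciliation no longer stalls waiting for the
      # OpenTelemetry and Cluster Observability Operators in this case.
      managementState: Managed
      namespace: redhat-ods-monitoring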
RHOAIENG-4148 - Standalone notebook fails to start due to character length
Previously, the notebook controller created OpenShift resources directly from your username without first checking its length. As a result, if the combined name of the OpenShift Route and namespace exceeded the 63-character limit for DNS subdomains, creation of the OpenShift Route failed with the following validation error: spec.host: ... must be no more than 63 characters. Without the Route, the dependent OAuthClient could not be configured, and workbenches could not start.
With this release, the notebook controller proactively checks name lengths before creating resources. For Routes, if the combined length of the notebook name and namespace would exceed the 63-character limit, the controller now creates the Route using the generateName field with a prefix of nb-. For StatefulSets, if the notebook name is longer than 52 characters, the controller also uses generateName: "nb-" to prevent naming conflicts.
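To illustrate the new behavior, a sketch of a Route created with generateName follows; the namespace, service, and port names are placeholders:

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    # Rather than a fixed name derived from a long notebook name, the
    # controller lets the API server generate one: the nb- prefix plus a
    # random suffix (for example nb-7kx2q), which keeps the generated
    # host label within the 63-character DNS limit.
    generateName: nb-
    namespace: my-data-science-project        # placeholder namespace
  spec:
    to:
      kind: Service
      name: my-workbench-tls                  # placeholder workbench service
    port:
      targetPort: oauth-proxy                 # placeholder port name
    tls:
      termination: reencrypt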
RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error
Previously, if you had enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but had not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly showed that KServe was not ready. However, the Degraded condition incorrectly showed a value of False. This issue is now resolved.
RHOAIENG-29352 - Missing Documentation and Support menu items
Previously, when you clicked the help icon in the OpenShift AI top navigation bar, the menu contained only the About menu item; the Documentation and Support menu items were missing. This issue is now resolved.
RHAIENG-496 - Error creating LlamaStackDistribution as a non-administrator user
Previously, non-administrator requests failed due to insufficient role-based access control (RBAC) permissions, because the deployed role definitions were outdated or incomplete for the current Llama Stack resources (for example, the LlamaStackDistribution CRD). This issue is now resolved.
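For reference, granting a non-administrator user access to the current Llama Stack resources requires a role rule along these lines. The role name, namespace, and in particular the API group are assumptions, so confirm the group against the LlamaStackDistribution CRD installed on your cluster:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: llamastack-editor                   # illustrative name
    namespace: my-data-science-project        # placeholder project namespace
  rules:
    # The API group below is an assumption; verify it by inspecting the
    # LlamaStackDistribution CRD on your cluster before applying.
    - apiGroups: ["llamastack.io"]
      resources: ["llamastackdistributions"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]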