Chapter 7. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI 3.2. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 3.2 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.
7.1. Issues resolved in Red Hat OpenShift AI 3.2
RHOAIENG-31071 - LM-Eval evaluations using Parquet datasets fail on IBM Z (s390x)
Before this update, Apache Arrow’s Parquet implementation contained endianness-specific code that was incompatible with the big-endian IBM Z (s390x) architecture, causing byte-order mismatches when reading Parquet-formatted datasets. As a result, LM-Eval evaluation tasks that used datasets in Parquet format failed on s390x systems with parsing errors. With this update, compatibility patches were applied to Apache Arrow, and a custom build for s390x now supports proper Parquet encoding and decoding.
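The class of failure described here can be illustrated with a minimal sketch using Python's `struct` module. This is a generic illustration of byte-order mismatches, not Arrow's actual implementation:

```python
import struct

value = 0x12345678

# A writer on a little-endian machine (for example, x86) serializes the
# value with little-endian byte order.
le_bytes = struct.pack("<i", value)

# A big-endian reader that assumes its own native byte order interprets
# the same bytes differently, producing a corrupted value.
misread = struct.unpack(">i", le_bytes)[0]
print(hex(misread))  # 0x78563412 instead of 0x12345678

# Fixing the byte order explicitly on both sides avoids the mismatch.
roundtrip = struct.unpack("<i", le_bytes)[0]
assert roundtrip == value
```

Arrow's fix is analogous: the patched code paths make the byte order explicit instead of assuming the little-endian layout of the writing platform.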
RHOAIENG-38579 - Cannot stop models served with the Distributed Inference Server runtime
Before this update, you could not stop models served with the Distributed Inference Server (llm-d) runtime from the OpenShift AI dashboard. This issue has been resolved.
RHOAIENG-38180 - Unable to send requests to Feature Store using the Feast SDK from workbench
Before this update, the default Feast configuration was missing certificates and a service, which prevented you from sending requests to your Feature Store by using the Feast SDK from a workbench. This issue has been resolved.
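For reference, a workbench client typically points the Feast SDK at a remote Feature Store through a `feature_store.yaml` file. The sketch below assumes Feast's remote registry and online store configuration; the project name and endpoints are placeholders:

```yaml
project: my_project            # placeholder project name
provider: local
registry:
  registry_type: remote        # connect to a remote Feast registry server
  path: feast-registry.example.com:443
online_store:
  type: remote                 # route online feature requests to a remote feature server
  path: https://feast-online.example.com
```

With a configuration like this in place, a workbench can load a `feast.FeatureStore` and issue online feature requests against the remote store, which is the flow that the missing certificates and service previously blocked.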
RHOAIENG-41588 - Standard openshift-container-platform route support added for dashboard access
Before this update, the transition to the Gateway API in Red Hat OpenShift AI 3.0 required a load balancer configuration. This requirement caused usability issues and led to deployment delays for users on bare-metal and cloud infrastructures. This issue has been resolved. The Gateway API now supports Cluster IP mode and standard openshift-container-platform route configuration in addition to the load balancer option, simplifying dashboard access for users.
For more information, see Configurable Ingress Mode for RHOAI 3.2 on Bare Metal, OpenStack and Private Clouds.
RHOAIENG-44616 - Inferencing with granite-3b model fails on IBM Power
Before this update, inference services for the granite-3b-code-instruct-2k model on IBM Power were created successfully, but chat completion requests sent to them failed with an internal server error. This issue is now resolved.
RHOAIENG-37686 - Metrics not displayed on the Dashboard due to image name mismatch in runtime detection logic
Previously, metrics were not displayed on the OpenShift AI dashboard because digest-based image names were not correctly recognized by the runtime detection system. This issue affected all InferenceService deployments in OpenShift AI 2.25 and later. This issue has been resolved.
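To illustrate why digest-pinned references can trip up name-based detection: an image can be referenced either as `repo/image:tag` or as `repo/image@sha256:<digest>`, and matching logic must strip both suffix forms before comparing names. The helper below is a hypothetical sketch of that normalization, not the dashboard's actual code:

```python
def base_image_name(image_ref: str) -> str:
    """Return the bare image name from a tagged or digest-pinned reference."""
    # Drop a digest suffix ("@sha256:..."), if present.
    name = image_ref.split("@", 1)[0]
    # Drop a tag suffix, taking care not to confuse it with a registry port:
    # a tag colon can only appear after the last "/".
    last_segment = name.rsplit("/", 1)[-1]
    if ":" in last_segment:
        name = name[: name.rfind(":")]
    # The final path segment is the image name used for runtime matching.
    return name.rsplit("/", 1)[-1]

# Tagged and digest-pinned references resolve to the same name.
print(base_image_name("quay.io/example/vllm:latest"))         # vllm
print(base_image_name("quay.io/example/vllm@sha256:0a1b2c"))  # vllm
print(base_image_name("registry:5000/example/vllm:latest"))   # vllm
```

Detection logic that only strips the `:tag` form leaves the `@sha256:...` suffix attached, so digest-based references fail to match any known runtime, which is the pattern of failure described above.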
RHOAIENG-37492 - Dashboard console link not accessible on IBM Power in 3.0.0
Previously, on private cloud deployments running on IBM Power, the OpenShift AI dashboard link was not visible in the OpenShift console when the dashboard was enabled in the DataScienceCluster configuration. As a result, users could not access the dashboard through the console without manually creating a route. This issue has been resolved.
RHOAIENG-1152 - Basic workbench creation process fails for users who have never logged in to the dashboard
This issue is now obsolete as of OpenShift AI 3.0. The basic workbench creation process has been updated, and this behavior no longer occurs.