Chapter 7. Resolved issues


The following notable issues are resolved in Red Hat OpenShift AI 3.3. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 3.3 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

7.1. Issues resolved in Red Hat OpenShift AI 3.3

RHOAIENG-24545 - Runtime images are not present in workbench after first start

Before this update, the list of runtime images was not populated for the first workbench instance that ran in a namespace. As a result, no image was available for selection in the Elyra pipeline editor, and a workaround was required to populate the runtime image list.

With this update, the list of runtime images is populated for the first running workbench instance in the namespace without any workaround, and the Elyra pipeline editor contains the expected runtime images from the first workbench start.

7.2. Issues resolved in Red Hat OpenShift AI 3.2

RHOAIENG-31071 - LM-Eval evaluations using Parquet datasets fail on IBM Z (s390x)

Before this update, Apache Arrow’s Parquet implementation contained endianness-specific code that was incompatible with the big-endian IBM Z (s390x) architecture, causing byte-order mismatches when reading Parquet-formatted datasets. As a result, LM-Eval evaluation tasks that used datasets in Parquet format failed on s390x systems with parsing errors. With this update, compatibility patches are applied to Apache Arrow and a custom build for s390x supports proper Parquet encoding and decoding, so these evaluations now run successfully.

RHOAIENG-38579 - Cannot stop models served with the Distributed Inference Server runtime

Before this update, you could not stop models served with the Distributed Inference Server with llm-d runtime from the OpenShift AI dashboard. This issue has been resolved.

RHOAIENG-38180 - Unable to send requests to Feature Store using the Feast SDK from workbench

Before this update, the default Feast configuration was missing certificates and a service, which prevented you from sending requests to your Feature Store by using the Feast SDK. This issue has been resolved.

RHOAIENG-41588 - Standard OpenShift Container Platform route support added for dashboard access

Before this update, the transition to the Gateway API in Red Hat OpenShift AI version 3.0 required a load balancer configuration. This requirement caused usability issues and deployment delays on bare-metal and cloud infrastructures. This issue has been resolved. The Gateway API now supports ClusterIP mode and standard OpenShift Container Platform route configuration in addition to the load balancer option, simplifying dashboard access for users.

For more information, see Configurable Ingress Mode for RHOAI 3.2 on Bare Metal, OpenStack and Private Clouds.

RHOAIENG-44616 - Inferencing with granite-3b model fails on IBM Power

Before this update, inference services for the granite-3b-code-instruct-2k model were created successfully, but chat completion requests sent to them failed with an internal server error. This issue is now resolved.

RHOAIENG-37686 - Metrics not displayed on the Dashboard due to image name mismatch in runtime detection logic

Previously, metrics were not displayed on the OpenShift AI dashboard because digest-based image names were not correctly recognized by the runtime detection system. This issue affected all InferenceService deployments in OpenShift AI 2.25 and later. This issue has been resolved.

RHOAIENG-37492 - Dashboard console link not accessible on IBM Power in 3.0.0

Previously, on private cloud deployments running on IBM Power, the OpenShift AI dashboard link was not visible in the OpenShift console when the dashboard was enabled in the DataScienceCluster configuration. As a result, users could not access the dashboard through the console without manually creating a route. This issue has been resolved.

RHOAIENG-1152 - Basic workbench creation process fails for users who have never logged in to the dashboard

This issue is now obsolete as of OpenShift AI 3.0. The basic workbench creation process has been updated, and this behavior no longer occurs.

RHOAIENG-9418 - Elyra raises error when you use parameters in uppercase

Previously, Elyra raised an error when you tried to run a pipeline that used parameters in uppercase. This issue is now resolved.

RHOAIENG-30493 - Error creating a workbench in a Kueue-enabled project

Previously, when using the dashboard to create a workbench in a Kueue-enabled project, the creation failed if Kueue was disabled on the cluster or if the selected hardware profile was not associated with a LocalQueue. In this case, the required LocalQueue could not be referenced, the admission webhook validation failed, and an error message was shown. This issue has been resolved.
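For context, a hardware profile opts into Kueue scheduling by referencing a local queue in the project. The sketch below is illustrative only; the API version and field names (for example, spec.scheduling.kueue.localQueueName) are assumptions based on the HardwareProfile API and are not taken from this document:

```yaml
# Illustrative sketch only: a HardwareProfile associated with a Kueue
# LocalQueue. Field names are assumptions, not confirmed by this document.
apiVersion: infrastructure.opendatahub.io/v1alpha1
kind: HardwareProfile
metadata:
  name: gpu-profile
  namespace: my-kueue-project
spec:
  identifiers:
    - identifier: nvidia.com/gpu
      displayName: NVIDIA GPU
      minCount: 1
      defaultCount: 1
      resourceType: Accelerator
  scheduling:
    type: Queue
    kueue:
      localQueueName: my-local-queue   # the LocalQueue must exist in the project
```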

RHOAIENG-32942 - Elyra requires unsupported filters on the REST API when pipeline store is Kubernetes

Before this update, when the pipeline store was configured to use Kubernetes, Elyra required equality (eq) filters that were not supported by the REST API. Only substring filters were supported in this mode. As a result, pipelines created and submitted through Elyra from a workbench could not run successfully. This issue has been resolved.

RHOAIENG-32897 - Pipelines defined with the Kubernetes API and invalid platformSpec do not appear in the UI or run

Before this update, when a pipeline version defined with the Kubernetes API included an empty or invalid spec.platformSpec field (for example, {} or missing the kubernetes key), the system misidentified the field as the pipeline specification. As a result, the REST API omitted the pipelineSpec, which prevented the pipeline version from being displayed in the UI and from running. This issue is now resolved.
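To illustrate the field in question, the sketch below shows the general shape of a pipeline version defined with the Kubernetes API. The schema is an assumption based on the Kubeflow Pipelines Kubernetes-native API and is not taken from this document:

```yaml
# Illustrative sketch only. A valid pipeline version carries its compiled IR
# under spec.pipelineSpec; if spec.platformSpec is present, it must include
# the kubernetes key.
apiVersion: pipelines.kubeflow.org/v2beta1
kind: PipelineVersion
metadata:
  name: my-pipeline-v1
spec:
  pipelineSpec:
    pipelineInfo:
      name: my-pipeline   # compiled pipeline IR goes here
  platformSpec:
    kubernetes: {}        # valid: the kubernetes key is present
  # platformSpec: {}      # previously misidentified as the pipeline spec
```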

RHOAIENG-31386 - Error deploying an Inference Service with authenticationRef

Before this update, when deploying an InferenceService with authenticationRef under external metrics, the authenticationRef field was removed. This issue is now resolved.
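As an illustration of the field involved, the sketch below shows an external autoscaling metric on an InferenceService that carries an authenticationRef. The autoScaling schema shown is an assumption based on KServe's external-metrics autoscaling support and is not confirmed by this document:

```yaml
# Illustrative sketch only: an external metric with an authenticationRef.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
    autoScaling:
      metrics:
        - type: External
          external:
            metric:
              backend: prometheus
              serverAddress: https://prometheus.example.com:9090  # assumed address
              query: vllm:num_requests_waiting
            target:
              type: Value
              value: "2"
            authenticationRef:           # previously removed during deployment
              name: example-prometheus-auth
```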

RHOAIENG-33914 - LM-Eval Tier2 task test failures

Previously, LM-Eval Tier2 task tests could fail because the Massive Multitask Language Understanding Symbol Replacement (MMLUSR) tasks were broken. This issue is resolved with the latest version of the trustyai-service-operator.

RHOAIENG-35532 - Unable to deploy models with HardwareProfiles and GPU

Before this update, using a HardwareProfile to allocate GPUs for model deployment had stopped working. The issue is now resolved.

RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade

Previously, installing or upgrading OpenShift AI on a cluster that already included an existing Argo Workflows instance could cause conflicts with the embedded Argo components deployed by Data Science Pipelines. This issue has been resolved. You can now configure OpenShift AI to use an existing Argo Workflows instance, enabling clusters that already run Argo Workflows to integrate with Data Science Pipelines without conflicts.
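A minimal sketch of opting out of the embedded Argo controllers so that an existing Argo Workflows instance is used instead; the argoWorkflowsControllers field is an assumption based on the DataScienceCluster API and is not taken from this document:

```yaml
# Illustrative sketch only: disable the embedded Argo Workflows controllers.
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      managementState: Managed
      argoWorkflowsControllers:
        managementState: Removed   # use the cluster's existing Argo instance
```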

RHOAIENG-35623 - Model deployment fails when using hardware profiles

Previously, model deployments that used hardware profiles failed because the Red Hat OpenShift AI Operator did not inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService when InferenceService resources were created manually. As a result, the model deployment pods could not be scheduled to suitable nodes and the deployment failed to enter a ready state. This issue is now resolved.
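For illustration, the sketch below shows the kind of scheduling fields the Operator injects from a hardware profile into the underlying InferenceService. The annotation name and the specific selector and toleration values are assumptions, not taken from this document:

```yaml
# Illustrative sketch only: scheduling fields injected from a hardware profile.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model
  annotations:
    opendatahub.io/hardware-profile-name: gpu-profile   # assumed annotation
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
    nodeSelector:                    # injected from the hardware profile
      nvidia.com/gpu.present: "true"
    tolerations:                     # injected from the hardware profile
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
```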
