Chapter 5. Resolved issues


The following notable issues are resolved in Red Hat OpenShift AI 2.8.5.

Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.8 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

Important

To receive the latest updates for OpenShift AI 2.8, your installation of the Red Hat OpenShift AI Operator must be configured to use the eus-2.8 update channel.
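If the Operator was installed through Operator Lifecycle Manager, the update channel is defined on the Operator's Subscription object. The following is a minimal sketch that assumes the typical default subscription name and namespace, which might differ in your cluster:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: rhods-operator            # assumed default subscription name for the OpenShift AI Operator
    namespace: redhat-ods-operator  # assumed default installation namespace
  spec:
    channel: eus-2.8                # update channel required to receive 2.8 updates
    name: rhods-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace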

For the latest information about the 2.8 release lifecycle, including the full support phase window, see Red Hat OpenShift AI Self-Managed Life Cycle.

5.1. Container grade release, Red Hat OpenShift AI 2.8.5 (November 2024)

This release provides updates for one or more of the Red Hat OpenShift AI container images that are listed in the Red Hat Container Catalog, which is part of the Red Hat Ecosystem Catalog.

For information about container health grades, see Container Health Index grades as used inside the Red Hat Container Catalog.

For a complete list of updates in a release, see the associated errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

5.2. Issues resolved in Red Hat OpenShift AI 2.8.4 (August 2024)

RHOAIENG-8898 - Cannot upgrade between versions of OpenShift AI 2.8.z

Previously, when an OpenShift cluster reached its resource capacity limit, you were unable to upgrade between versions of OpenShift AI 2.8.z. This issue occurred because the deployment pod failed to determine whether the OpenShift AI Operator was installed on a Self-Managed cluster or a managed cloud service cluster. This issue is now resolved.

5.3. Issues resolved in Red Hat OpenShift AI 2.8.3 (June 2024)

For a complete list of updates, see the RHBA-2024:3930 advisory.

RHOAIENG-6817 (previously documented as RHOAIENG-5025) - Self-signed certificates do not apply to the first created workbench

Previously, after self-signed certificates were configured centrally, the certificates did not apply to the first workbench created in a data science project. This issue is now resolved.

RHOAIENG-6455 - Workbenches created from the Jupyter tile lose self-signed certificates after being restarted

Previously, in the rhods-notebooks namespace, if you restarted a workbench that was created from the Jupyter tile on the OpenShift AI dashboard and configured with self-signed certificates, the certificates were no longer attached to the workbench after the restart. This issue is now resolved.

RHOAIENG-3378 - Internal Image Registry is an undeclared hard dependency for Jupyter notebooks spawn process

Previously, you had to enable the internal, integrated container image registry in OpenShift Container Platform before you could start OpenShift AI notebooks and workbenches. Attempts to start notebooks or workbenches without first enabling the image registry failed with an "InvalidImageName" error. This issue is now resolved.
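For reference, enabling the internal image registry (no longer required for this purpose) is done through the Image Registry Operator configuration. The following is a minimal sketch; the emptyDir storage backend shown here is illustrative only and is not suitable for production:

  apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    name: cluster
  spec:
    managementState: Managed   # enables the internal, integrated image registry
    storage:
      emptyDir: {}             # illustrative, non-production storage backend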

5.4. Issues resolved in Red Hat OpenShift AI 2.8.2 (May 2024)

For a complete list of updates, see the RHBA-2024:2745 advisory.

RHOAIENG-6433 - Enabling the kueue component incorrectly creates a namespace

Previously, if you enabled the kueue component in the default DataScienceCluster object, the opendatahub namespace was incorrectly created. This issue is now resolved so that the opendatahub namespace is no longer created by this component.

RHOAIENG-6276 - Enabling the kueue and ray components causes a reconciliation loop

Previously, if you enabled both the kueue and ray components in the default DataScienceCluster object, the DataScienceCluster object would continuously reconcile the components. This issue is now resolved.

RHOAIENG-5575 - Enabling the kueue component leaves a namespace that is not required by applications

Previously, if you enabled the kueue or ray component in the default DataScienceCluster object, the opendatahub namespace was incorrectly created. This issue is now resolved so that the opendatahub namespace is no longer created by these components, and on existing installations, the namespace is removed after upgrading to OpenShift AI 2.8.2 if no user pods are running there.
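For context, these components are enabled in the components section of the default DataScienceCluster object. The following is a minimal sketch; the resource name shown is the typical default and might differ in your cluster:

  apiVersion: datasciencecluster.opendatahub.io/v1
  kind: DataScienceCluster
  metadata:
    name: default-dsc            # assumed default resource name
  spec:
    components:
      kueue:
        managementState: Managed   # set to Managed to enable the kueue component
      ray:
        managementState: Managed   # set to Managed to enable the ray component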

5.5. Issues resolved in Red Hat OpenShift AI 2.8.1 (April 2024)

For a complete list of updates, see the RHBA-2024:1748 advisory.

RHOAIENG-4937 (previously documented as RHOAIENG-4572) - Unable to run data science pipelines after install and upgrade in certain circumstances

Previously, you were unable to run data science pipelines after installing or upgrading OpenShift AI in the following circumstances:

  • You installed OpenShift AI and you had a valid CA certificate. Within the default-dsci object, you changed the managementState field of the trustedCABundle field to Removed after installation (see the sketch that follows this list).
  • You upgraded OpenShift AI from version 2.6 to version 2.8 and you had a valid CA certificate.
  • You upgraded OpenShift AI from version 2.7 to version 2.8 and you had a valid CA certificate.

This issue is now resolved.
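For context, the managementState change described in the first scenario is made in the default-dsci object. The following is a minimal sketch of the relevant fields, assuming the default resource name:

  apiVersion: dscinitialization.opendatahub.io/v1
  kind: DSCInitialization
  metadata:
    name: default-dsci
  spec:
    trustedCABundle:
      customCABundle: ""
      managementState: Removed   # changed from Managed after installation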

RHOAIENG-4327 - Workbenches do not use the self-signed certificates from centrally configured bundle automatically

There are two bundle options for including self-signed certificates in OpenShift AI: ca-bundle.crt and odh-ca-bundle.crt. Previously, workbenches did not automatically use the self-signed certificates from the centrally configured bundle, and you had to define environment variables that pointed to your certificate path. This issue is now resolved.

RHOAIENG-673 (previously documented as RHODS-12946) - Cannot install from PyPI mirror in disconnected environment or when using private certificates

In disconnected environments, Red Hat OpenShift AI cannot connect to the public-facing PyPI repositories, so you must specify a repository inside your network. Previously, if you were using private TLS certificates and a data science pipeline was configured to install Python packages, the pipeline run would fail. This issue is now resolved.

RHOAIENG-637 (previously documented as RHODS-12904) - Pipeline submitted from Elyra might fail when using private certificate

Previously, if you used a private TLS certificate and submitted a pipeline from Elyra, the pipeline could fail with a certificate verify failed error message. This issue is now resolved.

5.6. Issues resolved in Red Hat OpenShift AI 2.8.0 (March 2024)

For a complete list of updates, see the RHBA-2024:1371 advisory.

RHOAIENG-3355 - OVMS on KServe does not use accelerators correctly

Previously, when you deployed a model using the single-model serving platform and selected the OpenVINO Model Server serving runtime, if you requested an accelerator to be attached to your model server, the accelerator hardware was detected but was not used by the model when responding to queries. This issue is now resolved.

RHOAIENG-2869 - Cannot edit existing model framework and model path in a multi-model project

Previously, when you tried to edit a model in a multi-model project using the Deploy model dialog, the Model framework and Path values did not update. This issue is now resolved.

RHOAIENG-2724 - Model deployment fails because fields automatically reset in dialog

Previously, when you deployed a model or edited a deployed model, the Model servers and Model framework fields in the Deploy model dialog might have reset to the default state. The Deploy button might have remained enabled even though these mandatory fields no longer contained valid values. This issue is now resolved.

RHOAIENG-2099 - Data science pipeline server fails to deploy in fresh cluster

Previously, when you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved.

RHOAIENG-1199 (previously documented as ODH-DASHBOARD-1928) - Custom serving runtime creation error message is unhelpful

Previously, when you tried to create or edit a custom model-serving runtime and an error occurred, the error message did not indicate the cause of the error. The error messages have been improved.

RHOAIENG-675 (previously documented as RHODS-12906) - Cannot use ModelMesh with object storage that uses private certificates

Previously, when you stored models in an object storage provider that used a private TLS certificate, the model serving pods failed to pull files from the object storage, and the signed by unknown authority error message was shown. This issue is now resolved.

RHOAIENG-556 - ServingRuntime for KServe model is created regardless of error

Previously, when you tried to deploy a KServe model and an error occurred, the InferenceService custom resource (CR) was still created and the model was shown in the Data Science Project page, but the status would always remain unknown. The KServe deploy process has been updated so that the ServingRuntime is not created if an error occurs.

RHOAIENG-548 (previously documented as ODH-DASHBOARD-1776) - Error messages when user does not have project administrator permission

Previously, if you did not have administrator permission for a project, you could not access some features, and the error messages did not explain why. For example, when you created a model server in an environment where you only had access to a single namespace, an Error creating model server error message appeared. However, the model server was still successfully created. This issue is now resolved.

RHOAIENG-66 - Ray dashboard route deployed by CodeFlare SDK exposes self-signed certs instead of cluster cert

Previously, when you deployed a Ray cluster by using the CodeFlare SDK with the openshift_oauth=True option, the resulting route for the Ray cluster was secured by using the passthrough method and, as a result, the self-signed certificate used by the OAuth proxy was exposed. This issue is now resolved.

RHOAIENG-12 - Cannot access Ray dashboard from some browsers

Previously, in some browsers, users of the distributed workloads feature might not have been able to access the Ray dashboard because the browser automatically changed the prefix of the dashboard URL from http to https. This issue is now resolved.

RHODS-6216 - The ModelMesh oauth-proxy container is intermittently unstable

Previously, ModelMesh pods did not deploy correctly due to a failure of the ModelMesh oauth-proxy container. This issue occurred intermittently and only if authentication was enabled in the ModelMesh runtime environment. This issue is now resolved.
