Chapter 7. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI 2.17. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.17 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.
7.1. Issues resolved in Red Hat OpenShift AI 2.17 (February 2025)
RHOAIENG-16900 - Space-separated format in serving-runtime arguments can cause deployment failure
Previously, when deploying models, using a space-separated format to specify additional serving runtime arguments could cause unrecognized arguments errors. This issue is now resolved.
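The underlying behavior can be illustrated with a short, self-contained Python sketch: each entry in a container's args list reaches the serving runtime as a single argv token, so a space-separated string such as "--max-batch-size 8" arrives as one unrecognized token. The flag name below is hypothetical and is not taken from any specific runtime.

import argparse

# Stand-in for a serving runtime's argument parser; the flag is hypothetical.
parser = argparse.ArgumentParser()
parser.add_argument("--max-batch-size")

# Arguments supplied as separate tokens parse cleanly.
print(parser.parse_args(["--max-batch-size", "8"]))  # Namespace(max_batch_size='8')

# A single space-separated string is treated as one token and is rejected,
# mirroring the "unrecognized arguments" failure described above.
try:
    parser.parse_args(["--max-batch-size 8"])
except SystemExit:
    print("unrecognized arguments: '--max-batch-size 8'")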
RHOAIENG-16073 - Attribute error when retrieving the job client for a cluster object
Previously, when initializing a cluster with the get_cluster method, assigning client = cluster.job_client sometimes resulted in an AttributeError: 'Cluster' object has no attribute '_job_submission_client' error. This issue is now resolved.
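For context, the following minimal sketch shows the pattern that previously failed, assuming an existing Ray cluster named raytest in a namespace named demo and a configured CodeFlare SDK environment; the entrypoint is illustrative only, and exact parameters can vary by SDK version.

from codeflare_sdk import get_cluster

# Retrieve an existing Ray cluster object (names are examples).
cluster = get_cluster(cluster_name="raytest", namespace="demo")

# Previously this assignment could raise
# AttributeError: 'Cluster' object has no attribute '_job_submission_client';
# the job submission client is now initialized correctly.
client = cluster.job_client

# Submit a trivial job and check its status (illustrative entrypoint).
job_id = client.submit_job(entrypoint="python -c 'print(1)'")
print(client.get_job_status(job_id))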
RHOAIENG-15773 - Cannot add a new model registry user
Previously, when managing the permissions of a model registry, you could not add a new user, group, or project as described in Managing model registry permissions. An HTTP request failed error was displayed. This issue is now resolved.
RHOAIENG-14197 - Tooltip text for CPU and Memory graphs is clipped and therefore unreadable
Previously, when you hovered the cursor over the CPU and Memory graphs in the Top resource-consuming distributed workloads section on the Project metrics tab of the Distributed Workloads Metrics page, the tooltip text was clipped, making it unreadable. This issue is now resolved.
RHOAIENG-11024 - Resources entries get wiped out after removing opendatahub.io/managed annotation
Previously, manually removing the opendatahub.io/managed annotation from any component deployment YAML file could cause resource entry values in the file to be erased. This issue is now resolved.
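For illustration only, the following sketch shows one way such an annotation edit might be made with the Kubernetes Python client; the deployment name and namespace are examples, and this is not a recommendation to remove the annotation.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Setting an annotation to None in a strategic merge patch removes it.
patch = {"metadata": {"annotations": {"opendatahub.io/managed": None}}}

# Example component deployment and namespace; adjust for your installation.
apps.patch_namespaced_deployment(
    name="example-component",
    namespace="redhat-ods-applications",
    body=patch,
)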
RHOAIENG-8102 - Incorrect requested resources reported when cluster has multiple cluster queues
Previously, when a cluster had multiple cluster queues, the resources requested by all projects were incorrectly reported as zero instead of the actual values. This issue is now resolved.
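If you want to cross-check the dashboard numbers, a hedged sketch like the following reads per-cluster-queue usage directly from the Kueue ClusterQueue resources; the status field shown (flavorsUsage) can vary across Kueue versions.

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# List all Kueue ClusterQueue custom resources (cluster-scoped).
queues = api.list_cluster_custom_object(
    group="kueue.x-k8s.io", version="v1beta1", plural="clusterqueues"
)

for cq in queues.get("items", []):
    name = cq["metadata"]["name"]
    # Field name may differ by Kueue version; guard against its absence.
    usage = cq.get("status", {}).get("flavorsUsage", [])
    print(name, usage)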