Release notes


Red Hat OpenShift AI Cloud Service 1

Features, enhancements, resolved issues, and known issues associated with this release

Abstract

These release notes provide an overview of new features, enhancements, resolved issues, and known issues in this release of Red Hat OpenShift AI. OpenShift AI is currently available in Red Hat OpenShift Dedicated and Red Hat OpenShift Service on Amazon Web Services (ROSA).

Chapter 1. Overview of OpenShift AI

Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications.

OpenShift AI provides an environment to develop, train, serve, test, and monitor AI/ML models and applications on-premise or in the cloud.

For data scientists, OpenShift AI includes Jupyter and a collection of default workbench images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. Deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. You can also accelerate your data science experiments through the use of graphics processing units (GPUs) and Intel Gaudi AI accelerators.

For administrators, OpenShift AI enables data science workloads in an existing Red Hat OpenShift or ROSA environment. Manage users with your existing OpenShift identity provider, and manage the resources available to workbenches to ensure data scientists have what they require to create, train, and host models. Use accelerators to reduce costs and allow your data scientists to enhance the performance of their end-to-end data science workflows using graphics processing units (GPUs) and Intel Gaudi AI accelerators.

OpenShift AI has two deployment options:

  • Self-managed software that you can install on-premise or in the cloud. You can install OpenShift AI Self-Managed in a self-managed environment such as OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift.

    For information about OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see Product Documentation for Red Hat OpenShift AI Self-Managed.

  • A managed cloud service, installed as an add-on in Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP) or in Red Hat OpenShift Service on Amazon Web Services (ROSA classic).

For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.

For a detailed view of the release lifecycle, including the full support phase window, see the Red Hat OpenShift AI Cloud Service Life Cycle Knowledgebase article.

Chapter 2. New features and enhancements

This section describes new features and enhancements in Red Hat OpenShift AI.

2.1. New features

Support added to TrustyAI for KServe InferenceLogger integration

TrustyAI now provides support for KServe inference deployments through automatic InferenceLogger configuration.

Both KServe raw and serverless deployment modes are supported, and the deployment mode is detected automatically from the InferenceService annotations.

Enhanced workload scheduling with Kueue

OpenShift AI now provides enhanced workload scheduling with the Red Hat build of Kueue. Kueue is a job-queuing system that provides resource-aware scheduling for workloads, improving GPU utilization and ensuring fair resource sharing with intelligent, quota-based scheduling across AI workloads.

This feature expands Kueue’s workload support in OpenShift AI to include workbenches (Notebook) and model serving (InferenceService), in addition to the previously supported distributed training jobs (RayJob, RayCluster, PyTorchJob).

A validating webhook now handles queue enforcement. This webhook ensures that in any project enabled for Kueue management (with the kueue.openshift.io/managed=true label), all supported workloads must specify a target LocalQueue (with the kueue.x-k8s.io/queue-name label). This replaces the Validating Admission Policy used in previous versions.
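
For illustration, the following is a minimal command-line sketch of opting a project in to Kueue management and pointing an existing workbench at a LocalQueue. The project, workbench, and queue names are placeholder assumptions, and the LocalQueue is assumed to already exist:

# Opt the project (namespace) in to Kueue management (example project name)
$ oc label namespace my-data-science-project kueue.openshift.io/managed=true

# Point an existing workbench (Notebook) at a LocalQueue (example workbench and queue names)
$ oc label notebook my-workbench -n my-data-science-project kueue.x-k8s.io/queue-name=my-local-queue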

For more information, see Managing workloads with Kueue.

Support added to view git commit hash in an image
You can now view the git commit hash in an image. This feature allows you to determine if the image has changed, even if the version number stays the same. You can also trace the image back to the source code if needed.
Support added for data science pipelines with Elyra
When using data science pipelines with Elyra, you now have the option to use a service-based URL rather than a route-based URL. You can use your data science pipeline directly through the service by including the port number.
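
For example, assuming a pipeline server service named ds-pipeline-dspa in the project my-project that listens on port 8443 (all three values are placeholder assumptions for illustration), the service-based URL follows the standard Kubernetes service DNS form:

https://ds-pipeline-dspa.my-project.svc.cluster.local:8443
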
Workbench images mirrored by default

The latest version of workbench images is mirrored by default when you mirror images to a private registry for a disconnected installation. As an administrator, you can mirror older versions of workbench images through the additionalImages field in the Disconnected Helper configuration.

Important

Only the latest version of workbench images is supported with bug fixes and security updates. Older versions of workbench images are available, but they do not receive bug fixes or security updates.

2.2. Enhancements

Support for serving models from a Persistent Volume Claim (PVC)
Red Hat OpenShift AI now supports serving models directly from existing cluster storage. With this feature, you can serve models from pre-existing persistent volume claim (PVC) locations and create new PVCs for model storage within the interface.
New option to disable caching for all pipelines in a project

You can now disable caching for all data science pipelines in the pipeline server, which overrides all pipeline and task-level caching settings. This global setting is useful for scenarios such as debugging, development, or cases that require deterministic re-execution.

This option is configurable with the Allow caching to be configured per pipeline and task checkbox when you create or edit a pipeline server. Cluster administrators can also configure this option by setting the spec.apiServer.cacheEnabled field. By default, this field is set to true. To disable caching cluster-wide, set this field to false. For more information, see Overview of data science pipelines caching.
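
The following is a minimal sketch of disabling caching from the command line, assuming the field is set on the DataSciencePipelinesApplication (DSPA) custom resource that backs the pipeline server; the DSPA name and project namespace are placeholders:

$ oc patch datasciencepipelinesapplication <dspa-name> -n <project-namespace> --type=merge -p '{"spec": {"apiServer": {"cacheEnabled": false}}}'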

Migration of production images from Quay to Red Hat Registry
RHOAI production images that fall under the current support model have been migrated from Quay to Red Hat Registry. They will continue to receive updates as defined by their release channel. Previously released images will remain in Quay.
JupyterLab version updated
JupyterLab is updated from version 4.2 to 4.4. This update includes a Move to Trash option when you right-click a folder, as well as other bug fixes and enhancements.
Updated vLLM component versions

OpenShift AI supports the following vLLM versions for each listed component:

  • vLLM CUDA: v0.10.0.2
  • vLLM ROCm: v0.10.0.2
  • vLLM Gaudi: v0.10.0.2
  • vLLM Power/Z: v0.10.0.2
  • OpenVINO Model Server: v2025.2.1

For more information, see vllm in GitHub.

Chapter 3. Technology Preview features

Important

This section describes Technology Preview features in Red Hat OpenShift AI. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Build Generative AI Apps with Llama Stack on OpenShift AI

With this release, the Llama Stack Technology Preview feature on OpenShift AI enables Retrieval-Augmented Generation (RAG) and agentic workflows for building next-generation generative AI applications. It supports remote inference, built-in embeddings, and vector database operations. It also integrates with providers such as the TrustyAI provider for safety and the TrustyAI LM-Eval provider for evaluation.

This preview includes tools, components, and guidance for enabling the Llama Stack Operator, interacting with the RAG Tool, and automating PDF ingestion and keyword search capabilities to enhance document discovery.

Centralized platform metrics and tracing
Centralized platform metrics and tracing are now available as a Technology Preview feature in OpenShift AI. This feature enables integration with the Cluster Observability Operator (COO), Red Hat build of OpenTelemetry, and Tempo Operator, and provides optional out-of-the-box observability configurations for OpenShift AI. It also introduces a dedicated observability stack. Future releases will collect infrastructure and customer workload signals in the dedicated observability stack.
Support for Llama Stack Distribution version 0.2.17

The Llama Stack Distribution now includes Llama-stack version 0.2.17 as Technology Preview. This feature brings a number of capabilities, including:

  • Model providers: Self-hosted providers like vLLM are now automatically registered, so you no longer need to manually set INFERENCE_MODEL variables.
  • Infrastructure and backends: Improved OpenAI inference support and added support for the Vector Store API.
  • Error handling: Errors are now standardized, and library client initialization has been improved.
  • Access Control: The Vector Store and File APIs now enforce access control, and telemetry read APIs are gated by user roles.
  • Bug fixes.
Support for IBM Power accelerated Triton Inference Server

You can now enable Power architecture support for the Triton Inference Server (CPU only) with the Python and ONNX backends. You can deploy the Triton Inference Server as a custom model-serving runtime on IBM Power architecture as a Technology Preview in Red Hat OpenShift AI.

For details, see Triton Inference Server image.

Support for Kubernetes Event-driven Autoscaling (KEDA)

OpenShift AI now supports Kubernetes Event-driven Autoscaling (KEDA) in its standard deployment mode. This Technology Preview feature enables metrics-based autoscaling for inference services, allowing for more efficient management of accelerator resources, reduced operational costs, and improved performance for your inference services.

To set up autoscaling for your inference service in standard deployments, you need to install and configure the OpenShift Custom Metrics Autoscaler (CMA), which is based on KEDA.

For more information about this feature, see Configuring metrics-based autoscaling.

LM-Eval model evaluation UI feature
TrustyAI now offers a user-friendly UI for LM-Eval model evaluations as Technology Preview. This feature allows you to input evaluation parameters for a given model and returns an evaluation-results page, all from the UI.
Use Guardrails Orchestrator with LlamaStack

You can now run detections using the Guardrails Orchestrator tool from TrustyAI with Llama Stack as a Technology Preview feature, using the built-in detection component. To use this feature, ensure that TrustyAI is enabled, the FMS Orchestrator and detectors are set up, and, if needed for full compatibility, KServe RawDeployment mode is in use. No manual setup is required.

Then, in the DataScienceCluster custom resource for the Red Hat OpenShift AI Operator, set the spec.llamastackoperator.managementState field to Managed.
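
For example, a minimal sketch of setting this field from the command line; the DataScienceCluster name default-dsc is an assumption and might differ on your cluster:

$ oc patch datasciencecluster default-dsc --type=merge -p '{"spec": {"llamastackoperator": {"managementState": "Managed"}}}'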

For more information, see the following resources on GitHub:

Define and manage pipelines with Kubernetes API

You can now define and manage data science pipelines and pipeline versions by using the Kubernetes API, which stores them as custom resources in the cluster instead of the internal database. This Technology Preview feature makes it easier to use OpenShift GitOps (Argo CD) or similar tools to manage pipelines, while still allowing you to manage them through the OpenShift AI user interface, API, and kfp SDK.

This option, enabled by default, is configurable with the Store pipeline definitions in Kubernetes checkbox when you create or edit a pipeline server. Cluster administrators can also configure this option by setting the spec.apiServer.pipelineStore field to kubernetes or database in the DataSciencePipelinesApplication (DSPA) custom resource. For more information, see Defining a pipeline by using the Kubernetes API.
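
For example, a minimal sketch of switching an existing pipeline server to the Kubernetes pipeline store from the command line; the DSPA name and project namespace are placeholders:

$ oc patch datasciencepipelinesapplication <dspa-name> -n <project-namespace> --type=merge -p '{"spec": {"apiServer": {"pipelineStore": "kubernetes"}}}'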

Model customization with LAB-tuning

LAB-tuning is now available as a Technology Preview feature, enabling data scientists to run an end-to-end workflow for customizing large language models (LLMs). The LAB (Large-scale Alignment for chatBots) method offers a more efficient alternative to traditional fine-tuning by leveraging taxonomy-guided synthetic data generation (SDG) and a multi-phase training approach.

Data scientists can run LAB-tuning workflows directly from the OpenShift AI dashboard by using the new preconfigured InstructLab pipeline, which simplifies the tuning process. For details on enabling and using LAB-tuning, see Enabling LAB-tuning and Customizing models with LAB-tuning.

Important

The LAB-tuning feature is not currently supported for disconnected environments.

Red Hat OpenShift AI Model Catalog

The Red Hat OpenShift AI Model Catalog is now available as a Technology Preview feature. This functionality starts with connecting users with the Granite family of models, as well as the teacher and judge models used in LAB-tuning.

Note

The model catalog feature is not currently supported for disconnected environments.

New Feature Store component

You can now install and manage Feature Store as a configurable component in the Red Hat OpenShift AI Operator. Based on the open-source Feast project, Feature Store acts as a bridge between ML models and data, enabling consistent and scalable feature management across the ML lifecycle.

This Technology Preview release introduces the following capabilities:

  • Centralized feature repository for consistent feature reuse
  • Python SDK and CLI for programmatic and command-line interactions to define, manage, and retrieve features for ML models
  • Feature definition and management
  • Support for a wide range of data sources
  • Data ingestion via feature materialization
  • Feature retrieval for both online model inference and offline model training
  • Role-Based Access Control (RBAC) to protect sensitive features
  • Extensibility and integration with third-party data and compute providers
  • Scalability to meet enterprise ML needs
  • Searchable feature catalog
  • Data lineage tracking for enhanced observability

    For configuration details, see Configuring Feature Store.

Enable targeted deployment of workbenches to specific worker nodes in Red Hat OpenShift AI Dashboard using node selectors

Hardware profiles are now available as a Technology Preview. Hardware profiles enable users to target specific worker nodes for workbench or model-serving workloads, including nodes with specific accelerator types or CPU-only nodes.

This feature replaces the current accelerator profiles feature and container size selector field, offering a broader set of capabilities for targeting different hardware configurations. While accelerator profiles, taints, and tolerations provide some capabilities for matching workloads to hardware, they do not ensure that workloads land on specific nodes, especially if some nodes lack the appropriate taints.

The hardware profiles feature supports both accelerator and CPU-only configurations, along with node selectors, to enhance targeting capabilities for specific worker nodes. Administrators can configure hardware profiles in the settings menu. Users can select the enabled profiles using the UI for workbenches, model serving, and Data Science Pipelines where applicable.

RStudio Server workbench image

With the RStudio Server workbench image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions.

To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.

Important

Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.

CUDA - RStudio Server workbench image

With the CUDA - RStudio Server workbench image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools.

To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.

Important

Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.

The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench.

Model Registry
OpenShift AI now supports the Model Registry Operator. The Model Registry Operator is not installed by default while in Technology Preview. The model registry is a central repository that contains metadata related to machine learning models from inception to deployment.
Support for multinode deployment of very large models
Serving models over multiple graphics processing unit (GPU) nodes when using a single-model serving runtime is now available as a Technology Preview feature. Deploy your models across multiple GPU nodes to improve efficiency when deploying large models such as large language models (LLMs). For more information, see Deploying models across multiple GPU nodes.

Chapter 4. Developer Preview features

Important

This section describes Developer Preview features in Red Hat OpenShift AI.

Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.

Distributed Inference Server for LLMs
Distributed Inference Server (vLLM with Distributed Routing) is now available as a Developer Preview feature. Distributed Inference Server supports multi-model serving, intelligent inference scheduling, and disaggregated serving for improved GPU utilization on GenAI models.

For more information, see Deploying a model by using the LLM Inference Service (LLM-D).

Run evaluations for TrustyAI-Llama Stack using LM-Eval

You can now run evaluations using LM-Eval on Llama Stack with TrustyAI as a Developer Preview feature, using the built-in LM-Eval component and advanced content moderation tools. To use this feature, ensure that TrustyAI is enabled, the FMS Orchestrator and detectors are set up, and, if needed for full compatibility, KServe RawDeployment mode is in use. No manual setup is required.

Then, in the DataScienceCluster custom resource for the Red Hat OpenShift AI Operator, set the spec.llamastackoperator.managementState field to Managed.

For more information, see the following resources on GitHub:

LLM Compressor integration

LLM Compressor capabilities are now available in Red Hat OpenShift AI as a Developer Preview feature. A new workbench image with the llm-compressor library and a corresponding data science pipelines runtime image make it easier to compress and optimize your large language models (LLMs) for efficient deployment with vLLM. For more information, see llm-compressor in GitHub.

You can use LLM Compressor capabilities in two ways:

Support for AppWrapper in Kueue
AppWrapper support in Kueue is available as a Developer Preview feature. The experimental API enables the use of AppWrapper-based workloads with the distributed workloads feature.

Chapter 5. Support removals

This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.

5.1. Upcoming deprecations

5.1.1. Upcoming deprecation of LAB-tuning

The LAB-tuning feature, currently available as a Technology Preview, is planned for deprecation in a later release. If you are using LAB-tuning for large language model customization, plan to migrate to alternative fine-tuning or model customization methods as they become available.

5.2. Deprecated

5.2.1. Deprecated CodeFlare Operator

Starting with OpenShift AI 2.24, the CodeFlare Operator is deprecated and will be removed in a future release of OpenShift AI.

5.2.2. Deprecated embedded Kueue component

Starting with OpenShift AI 2.24, the embedded Kueue component for managing distributed workloads is deprecated. OpenShift AI now uses the Red Hat Build of Kueue Operator to provide enhanced workload scheduling across distributed training, workbench, and model serving workloads. The deprecated embedded Kueue component is not supported in any Extended Update Support (EUS) release. To ensure workloads continue using queue management, you must migrate from the embedded Kueue component to the Red Hat Build of Kueue Operator, which requires OpenShift Container Platform 4.18 or later. To migrate, complete the following steps:

  1. Install the Red Hat Build of Kueue Operator from OperatorHub.
  2. Edit your DataScienceCluster custom resource to set the spec.components.kueue.managementState field to Unmanaged, as shown in the example after these steps.
  3. Verify that existing Kueue configurations (ClusterQueue and LocalQueue) are preserved after migration.
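
The following is a hedged command-line sketch of steps 2 and 3; the DataScienceCluster name default-dsc is an assumption and might differ on your cluster:

$ oc patch datasciencecluster default-dsc --type=merge -p '{"spec": {"components": {"kueue": {"managementState": "Unmanaged"}}}}'
$ oc get clusterqueues
$ oc get localqueues --all-namespaces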

For detailed instructions, see Migrating to the Red Hat build of Kueue Operator.

Note

This deprecation does not affect the Red Hat OpenShift AI API tiers.

5.2.3. Multi-model serving platform (ModelMesh)

Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.

For more information or for help on using the single-model serving platform, contact your account manager.

5.2.4. Deprecated Text Generation Inference Server (TGIS)

Starting with OpenShift AI version 2.19, the Text Generation Inference Server (TGIS) is deprecated. TGIS will continue to be supported through the OpenShift AI 2.16 EUS lifecycle. Caikit-TGIS and Caikit are not affected and will continue to be supported. The out-of-the-box serving runtime template will no longer be deployed. vLLM is recommended as a replacement runtime for TGIS.

5.2.5. Deprecated accelerator profiles

Accelerator profiles are now deprecated. To target specific worker nodes for workbenches or model serving workloads, use hardware profiles.

5.2.6. Deprecated CUDA plugin for the OpenVINO Model Server (OVMS)

The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.

5.2.7. Deprecated groupsConfig option in the OdhDashboardConfig resource

Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.

Table 5.1. Updated configurations
Resource       2.16 and earlier                  2.17 and later versions
apiVersion     opendatahub.io/v1alpha            services.platform.opendatahub.io/v1alpha1
kind           OdhDashboardConfig                Auth
name           odh-dashboard-config              auth
Admin groups   spec.groupsConfig.adminGroups     spec.adminGroups
User groups    spec.groupsConfig.allowedGroups   spec.allowedGroups
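
For example, a hedged sketch of how a GitOps workflow or administrator might update group access through the Auth resource instead of OdhDashboardConfig. The Auth resource is assumed to be cluster-scoped with the name auth, the adminGroups and allowedGroups fields are assumed to be lists of group names, and the group values shown are placeholders:

$ oc patch auth.services.platform.opendatahub.io auth --type=merge -p '{"spec": {"adminGroups": ["<admin-group>"], "allowedGroups": ["<user-group>"]}}'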

5.2.8. Deprecated cluster configuration parameters

When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.

Deprecated parameter   Replaced by
head_cpus              head_cpu_requests, head_cpu_limits
head_memory            head_memory_requests, head_memory_limits
min_cpus               worker_cpu_requests
max_cpus               worker_cpu_limits
min_memory             worker_memory_requests
max_memory             worker_memory_limits
head_gpus              head_extended_resource_requests
num_gpus               worker_extended_resource_requests

You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).

5.3. Removed functionality

5.3.1. Microsoft SQL Server command-line tools removed

Starting with OpenShift AI 2.24, the Microsoft SQL Server command-line tools (sqlcmd, bcp) have been removed from workbenches. You can no longer manage Microsoft SQL Server by using the preinstalled command-line client.

5.3.2. ML Metadata (MLMD) server removed from the model registry

Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. This change simplifies the overall architecture and ensures the long-term maintainability and efficiency of the model registry by transitioning from the ml-metadata component to direct database access within the model registry itself.

If you see the following error for your model registry deployment, this means that your database schema migration has failed:

error: error connecting to datastore: Dirty database version {version}. Fix and force version.

You can fix this issue by manually resetting the dirty flag in the database to 0 before traffic can be routed to the pod. Perform the following steps:

  1. Find the name of your model registry database pod as follows:

    kubectl get pods -n <your-namespace> | grep model-registry-db

    Replace <your-namespace> with the namespace where your model registry is deployed.

  2. Use kubectl exec to run the query on the model registry database pod as follows:

    kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"'

    Replace <your-namespace> with your model registry namespace and <your-db-pod-name> with the pod name that you found in the previous step. Replace <your-db-name> with your model registry database name.

    This will reset the dirty state in the database, allowing the model registry to start correctly.

5.3.3. Anaconda removal

Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.

If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:

  1. Remove the secret that contains your Anaconda password:

    oc delete secret -n redhat-ods-applications anaconda-ce-access

  2. Remove the ConfigMap for the Anaconda validation cronjob:

    oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result

  3. Remove the Anaconda image stream:

    oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda

  4. Remove the Anaconda job that validated the downloading of images:

    oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run

  5. Remove any pods related to Anaconda cronjob runs:

    oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run*/ {print $1}' | xargs -r oc delete pod -n redhat-ods-applications

5.3.4. Data science pipelines v1 support removed

Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Data science pipelines are now based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.

Data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.

OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you want to use existing pipelines and workbenches with data science pipelines 2.0 after upgrading OpenShift AI, you must update your workbenches to use the 2024.1 workbench image version or later and then manually migrate your pipelines from data science pipelines 1.0 to 2.0. For more information, see Migrating to data science pipelines 2.0.

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade to OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.

5.3.5. Elyra pipeline logs no longer stored in S3-compatible storage

Logs for Python scripts running in Elyra pipelines are no longer stored in S3-compatible storage. Starting with OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.

Note

For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.

If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.

Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.

5.3.6. NVIDIA GPU add-on removed

Previously, to enable graphics processing units (GPUs) to help with compute-heavy workloads, you installed the NVIDIA GPU add-on. OpenShift AI no longer supports this add-on.

Now, to enable GPU support, you must install the NVIDIA GPU Operator. To learn how to install the GPU Operator, see NVIDIA GPU Operator on Red Hat OpenShift Container Platform (external).

5.3.7. JupyterHub removal

In OpenShift AI 1.15 and earlier, JupyterHub was used to create and launch basic workbenches. In OpenShift AI 1.16 and later, JupyterHub is no longer included, and its functionality is replaced by the Kubeflow Notebook Controller.

This change provides the following benefits:

  • Users can now immediately cancel a request, make changes, and retry the request, instead of waiting 5+ minutes for the initial request to time out. This means that users do not wait as long when requests fail, for example, when a basic workbench does not start correctly.
  • The architecture no longer prevents a single user from having more than one basic workbench session, expanding future feature possibilities.
  • The removal of the PostgreSQL database requirement allows for future expanded environment support in OpenShift AI.

However, this update also creates the following behavior changes:

  • For cluster administrators, the administration interface for basic workbenches does not currently allow login access to data scientist users' workbenches. This is planned to be added in future releases.
  • For data scientists, the JupyterHub interface URL is no longer valid. Update your bookmarks to point to the OpenShift AI Dashboard.

The JupyterLab interface is unchanged and data scientists can continue to use JupyterLab to work with their Jupyter notebook files as usual.

5.3.8. HabanaAI workbench image removal

Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.

Chapter 6. Resolved issues

The following notable issues are resolved in Red Hat OpenShift AI.

OCPBUGS-44432 - ImageStream unable to import image tags in a disconnected OpenShift environment

Before this update, if you used the ImageTagMirrorSet (ITMS) or ImageDigestMirrorSet (IDMS) in a disconnected OpenShift environment, the ImageStream resource prevented the mirror from importing the image, and a RHOAI workbench instance could not be created. This issue is now resolved in OpenShift Container Platform 4.19.13 or later. Update your OpenShift instances to 4.19.13 or later to avoid this issue.

RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade

After upgrading from OpenShift AI 2.22 or earlier with the model registry component enabled, the model registry Operator could enter a restart loop. This was due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod. This issue is now resolved.

RHOAIENG-31248 - KServe http: TLS handshake error

Previously, the OpenShift CA auto-injection in the localmodelcache validation webhook configuration was missing the necessary annotation, leading to repeated TLS handshake errors. This issue is now resolved.

RHOAIENG-31376 - Inference service creation using vLLM runtime fails on IBM Power cluster

Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: 'OpNamespace' '_C_utils' object has no attribute 'init_cpu_threads_env'. This issue is now resolved.

RHOAIENG-31377 - Inference service creation fails on IBM Power cluster

Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name. This issue is now resolved.

RHOAIENG-31498 - Incorrect inference URL in LlamaStack LMEval provider

Before this update, when you ran evaluations on Llama Stack using the LMEval provider, the evaluation jobs erroneously used the model server endpoint as v1/openai/v1/completions. This resulted in a job failure because the correct model server endpoint was v1/completions. This issue is now resolved.

RHOAIENG-31536 - Prometheus configuration not reconciled properly

Before this update, the Monitoring resource did not reconcile properly and showed a "Not Ready" status when upgrading to or installing 2.23. This issue occurred because the resource required the OpenTelemetry and Cluster Observability Operators to be installed, even if no new monitoring or tracing configurations were added to the DSCInitialization resource. As a result, Prometheus configuration did not reconcile and led to empty or outdated alert configurations. This issue is now resolved.

RHOAIENG-4148 - Standalone notebook fails to start due to character length

Previously, the notebook controller logic did not proactively check username lengths before it attempted to create resources. The notebook controller creates OpenShift resources using your username directly. As a result, if the combined name of the OpenShift Route and namespace exceeded the 63-character limit for DNS subdomains, the creation of the OpenShift Route failed with the following validation error: spec.host: ... must be no more than 63 characters. Without the Route, the dependent OAuthClient could not be configured, and workbenches could not start.

With this release, the notebook controller’s logic has been updated to proactively check name character lengths before creating resources. For Routes, if the combined length of the notebook name and namespace would exceed the 63-character limit, the controller now creates the Route using the generateName field with a prefix of nb-. For StatefulSets, if the notebook name is longer than 52 characters, the controller also uses generateName: "nb-" to prevent naming conflicts.

RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error

Previously, if you had enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but had not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly showed that KServe is not ready. However, the Degraded condition incorrectly showed a value of False. This issue is now resolved.

RHOAIENG-29352 - Missing Documentation and Support menu items

Previously, in the OpenShift AI top navigation bar, when you clicked the help icon, the menu contained only the About menu item, and the Documentation and Support menu items were missing. This issue is now resolved.

RHAIENG-496 - Error creating LlamaStackDistribution as a non-administrator user

Previously, non-administrator requests failed due to insufficient role-based access control (RBAC) as the deployed role definitions were outdated or incomplete for the current Llama Stack resources (for example, the LlamaStackDistribution CRD). This issue is now resolved.

RHOAIENG-27676 - Accelerator profile does not work correctly with deleted case

If you deleted your accelerator profile after creating a workbench, deployment, or model server, the Edit page did not use existing settings and showed the wrong accelerator profile. This issue is now resolved.

RHOAIENG-25733 - Accelerator profile does not work correctly with duplicate name

When you created a workbench, deployment, or model and used the same name for the project-scoped accelerator profile as the global-scoped accelerator profile, the Edit page and server form displayed incorrect labels in the respective tables and form. This issue is now resolved.

RHOAIENG-26537 - Users cannot access the dashboard after installing OpenShift AI 2.21

After you installed OpenShift AI 2.21 and created a DataScienceCluster on a new cluster, you could not access the dashboard because the Auth custom resource was created without the default group configuration. This issue is now resolved.

RHOAIENG-26464 - InstructLab training phase1 pods restart when using default value due to insufficient memory in RHOAI 2.21

When you ran the InstructLab pipeline using the default value for the train_memory_per_worker input parameter (100 GiB), the phase1 training task failed because of insufficient pod memory. This issue is now resolved.

RHOAIENG-26263 - Node selector not cleared when changing the hardware profile for a workbench or model deployment

If you edited an existing workbench or model deployment to change the hardware profile from one that included a node selector to one that did not, the previous node placement settings could not be removed. With this release, the issue is resolved.

RHOAIENG-26099 - Environment variable HTTP_PROXY and HTTPS_PROXY added to notebooks

Previously, the notebook controller injected a cluster-wide OpenShift Proxy configuration to all newly created and restarted workbenches. With this release, proxy configurations are not injected unless a cluster administrator enables proxy configuration through the ConfigMap.

To enable proxy configuration, run the following command:

$ oc create configmap notebook-controller-setting-config --from-literal=INJECT_CLUSTER_PROXY_ENV=true -n redhat-ods-applications
Important

Any change to the INJECT_CLUSTER_PROXY_ENV key in the config map is propagated only after the odh-notebook-controller pod is recreated. To apply the updated behavior, either delete the relevant pod or perform a deployment rollout.

To delete the pod, run the following command:

$ oc delete pod -l app=odh-notebook-controller -A

To perform a deployment rollout, run the following command:

$ oc rollout restart -n redhat-ods-applications deployment/odh-notebook-controller-manager

RHOAIENG-23475 - Inference requests on IBM Power in a disconnected environment fail with a timeout error

Previously, when you used the IBM Power architecture to send longer prompts of more than 100 input tokens to the inference service, there was no response from the inference service. With this release, the issue is resolved.

RHOAIENG-20595 - Pipelines tasks fail to run when defining an http_proxy environment variable

Previously, pipeline tasks failed to run if you attempted to set the http_proxy or https_proxy environment variables in a pipeline task. With this release, the issue is resolved.

RHOAIENG-16568 - Unable to download notebook as a PDF from JupyterLab Workbenches

Previously, you could not download a notebook as a PDF file in Jupyter. With this release, the issue is resolved.

RHOAIENG-14271 - Compatibility errors occur when using different Python versions in Ray clusters with Jupyter notebooks

Previously, when you used Python version 3.11 in a Jupyter notebook and then created a Ray cluster, the cluster defaulted to a workbench image that contained both Ray version 2.35 and Python version 3.9, which caused compatibility errors. With this release, the issue is resolved.

RHOAIENG-7947 - Model serving fails during query in KServe

Previously, if you initially installed the ModelMesh component and enabled the multi-model serving platform, but later installed the KServe component and enabled the single-model serving platform, inference requests to models deployed on the single-model serving platform could fail. This issue no longer occurs.

RHOAIENG-580 (previously documented as RHODS-9412) - Elyra pipeline fails to run if workbench is created by a user with edit permissions

If you were granted edit permissions for a project and created a project workbench, you saw the following behavior:

  • During the workbench creation process, you received an Error creating workbench message related to the creation of Kubernetes role bindings.
  • Despite the preceding error message, OpenShift AI still created the workbench. However, the error message meant that you were not able to use the workbench to run Elyra data science pipelines.
  • If you tried to use the workbench to run an Elyra pipeline, Jupyter showed an Error making request message that described failed initialization.

    With this release, these issues are resolved.

RHOAIENG-24682 - [vLLM-Cuda] Unable to deploy model on FIPS enabled cluster

Previously, if you deployed a model by using the vLLM NVIDIA GPU ServingRuntime for KServe or vLLM ServingRuntime Multi-Node for KServe runtimes on NVIDIA accelerators in a FIPS-enabled cluster, the deployment could fail. This issue is now resolved.

RHOAIENG-23596 - Inference requests on IBM Power with longer prompts to the inference service fail with a timeout error

Previously, when using the IBM Power architecture to send longer prompts of more than 100 input tokens to the inference service, there was no response from the inference service. This issue no longer occurs.

RHOAIENG-24886 - Cannot deploy OCI model when Model URI field includes prefix

Previously, when deploying an OCI model, if you pasted the complete URI in the Model URI field and then moved the cursor to another field, the URL prefix (for example, http://) was removed from the Model URI field, but it was included in the storageUri value in the InferenceService resource. As a result, you could not deploy the OCI model. This issue is now resolved.

RHOAIENG-24104 - KServe reconciler should only deploy certain resources when Authorino is installed

Previously, when Authorino was not installed, Red Hat OpenShift AI applied the AuthorizationPolicy and EnvoyFilter resources to the KServe serverless deployment mode. This could block some inference requests. This issue is now resolved.

RHOAIENG-23562 - TrustyAIService TLS handshake error in FIPS clusters

Previously, when using a FIPS cluster that uses an external route to send a request to the TrustyAIService, a TLS handshake error appeared in the logs and the request was not processed. This issue is now resolved.

RHOAIENG-23169 - StorageInitializer fails to download models from Hugging Face repository

Previously, deploying models from Hugging Face in a KServe environment using the hf:// protocol failed when the cluster lacked built-in support for this protocol. Additionally, the storage initializer InitContainer in KServe could encounter a PermissionError error because of insufficient write permissions in the default cache directory (/.cache). This issue is now resolved.

RHOAIENG-22965 - Data science pipeline task fails when optional input parameters are not set

Previously, when a pipeline had optional input parameters, creating a pipeline run and tasks with unset parameters failed with the following error:

failed: failed to resolve inputs: resolving input parameter optional_input with spec component_input_parameter:"parameter_name": parent DAG does not have input parameter

This issue also affected the InstructLab pipeline. This issue is now resolved.

RHOAIENG-22439 - cuda-rstudio-rhel9 cannot be built

Previously, when building the RStudio Server workbench images, the cuda-rstudio-rhel9 build failed with the following error:

Package supervisor-4.2.5-6.el9.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
.M...U...    /run/supervisor
error: build error: building at STEP "RUN yum -y module enable nginx:$NGINX_VERSION &&     INSTALL_PKGS="nss_wrapper bind-utils gettext hostname nginx nginx-mod-stream nginx-mod-http-perl fcgiwrap initscripts chkconfig supervisor" &&     yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS &&     rpm -V $INSTALL_PKGS &&     nginx -v 2>&1 | grep -qe "nginx/$NGINX_VERSION\." && echo "Found VERSION $NGINX_VERSION" &&     yum -y clean all --enablerepo='*'": while running runtime: exit status 1

This issue is now resolved.

RHOAIENG-21274 - Connection type changes back to S3 or URI when deploying a model with an OCI connection type

Previously, if you deployed a model that was using the S3 or URI connection type in a project with no matching connections, the Create new connection section was pre-populated using data from the model location of the S3 or URI storage location. If you changed the connection type to OCI and entered a value in the Model URI field, the connection type changed back to S3 or URI. This issue is now resolved.

RHOAIENG-6486 - Pod labels, annotations, and tolerations cannot be configured when using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image

Previously, TensorFlow-based workbench images did not allow users to set pod labels, annotations, and tolerations when using the Elyra JupyterLab extension. With the 2025.1 images, the TensorFlow-based workbench is upgraded with the Kubeflow Pipelines SDK (kfp). With the upgraded SDK, you can set pod labels, annotations, and tolerations when using the Elyra extension to schedule data science pipelines.

RHOAIENG-21197 - Deployment failure when using vLLM runtime on AMD GPU accelerators in a FIPS-enabled cluster

Previously, when deploying a model by using the vLLM runtime on AMD GPU accelerators in a FIPS-enabled cluster, the deployment could fail. This issue is now resolved.

RHOAIENG-20245 - Certain model registry operations remove custom properties from the registered model and version

Previously, editing the description, labels, or properties of a model version removed labels and custom properties from the associated model. Deploying a model version, or editing its model source format, removed labels and custom properties from the version and from the associated model. This issue is now resolved.

RHOAIENG-19954 - Kueue alerts not monitored in OpenShift

Previously, in the OpenShift console, Kueue alerts were not monitored. The new ServiceMonitor resource rejected the usage of the BearerTokenFile field, which meant that Prometheus did not have the required permissions to scrape the target. As a result, the Kueue alerts were not shown on the Observe > Alerting page, and the Kueue targets were not shown on the Observe > Targets page. This issue is now resolved.

RHOAIENG-19716 - The system-authenticated user group cannot be removed by using the dashboard

Previously, after installing or upgrading Red Hat OpenShift AI, the system-authenticated user group was displayed in Settings > User management under Data science user groups. If you removed this user group from Data science user groups and saved the changes, the group was erroneously added again. This issue is now resolved.

RHOAIENG-18238 - Inference endpoints for deployed models return 403 error after upgrading the Authorino Operator

Previously, after upgrading the Authorino Operator, the automatic Istio sidecar injection might not have been reapplied. Without the sidecar, Authorino was not correctly integrated into the service mesh, causing inference endpoint requests to fail with an HTTP 403 error. This issue is now resolved.

RHOAIENG-11371 - Incorrect run status reported for runs using ExitHandler

Previously, when using pipeline exit handlers (dsl.ExitHandler), if a task inside the handler failed but the exit task succeeded, the overall pipeline run status was inaccurately reported as Succeeded instead of Failed. This issue is now resolved.

RHOAIENG-16146 - Connection sometimes not preselected when deploying a model from model registry

Previously, when deploying a model from a model registry, the object storage connection (previously called data connection) might not have been preselected. This issue is now resolved.

RHOAIENG-21068 - InstructLab pipeline run cannot be created when the parameter sdg_repo_pr is left empty

Previously, when creating a pipeline run of the InstructLab pipeline, if the parameter sdg_repo_pr was left empty, the pipeline run could not be created and an error message appeared. This issue is now resolved.

RHOAIENG-19711 - Kueue-controller-manager uses old metrics port after upgrade from 2.16.0 to 2.17.0

Previously, after upgrading, the Kueue Operator continued to use the old port (8080) instead of the new port (8443) for metrics. As a result, the OpenShift console Observe > Targets page showed that the status of the Kueue Operator was Down. This issue is now resolved.

RHOAIENG-19261 - The TrustyAI installation might fail due to missing custom resource definitions (CRDs)

Previously, when installing or upgrading OpenShift AI, the TrustyAI installation might have failed due to missing InferenceService and ServingRuntime CRDs. As a result, the TrustyAI controller went into the CrashLoopBackOff state. This issue is now resolved.

RHOAIENG-18933 - Increased workbench image size can delay workbench startup

Previously, the presence of the kubeflow-training Python SDK in the 2024.2 workbench images increased the workbench image size, which could delay workbench startup. This issue is now resolved.

RHOAIENG-18884 - Enabling NIM account setup is incomplete

Previously, when you tried to enable the NVIDIA NIM model serving platform, the odh-model-controller deployment started before the NIM account setup was complete. As a result, the NIM account setup was incomplete and the platform was not enabled. This issue is now resolved.

RHOAIENG-18675 - Workbenches component fails after upgrading

Previously, when upgrading to OpenShift AI 1, the workbenches component did not upgrade correctly. Specifically, BuildConfigs and the resources that follow them (for example, RStudio BuildConfigs and ROCm imagestreams) were not updated, which caused the workbenches component reconciliation in the DataScienceCluster to fail. This issue is now resolved.

RHOAIENG-15123 (also documented as RHOAIENG-10790 and RHOAIENG-14265) - Pipelines schedule might fail after upgrading

Previously, when you upgraded to OpenShift AI 1, any data science pipeline scheduled runs that existed before the upgrade might fail to execute, resulting in an error message in the task pod. This issue is now resolved.

RHOAIENG-16900 - Space-separated format in serving-runtime arguments can cause deployment failure

Previously, when deploying models, using a space-separated format to specify additional serving runtime arguments could cause unrecognized arguments errors. This issue is now resolved.

RHOAIENG-16073 - Attribute error when retrieving the job client for a cluster object

Previously, when initializing a cluster with the get_cluster method, assigning client = cluster.job_client sometimes resulted in an AttributeError: 'Cluster' object has no attribute '_job_submission_client' error. This issue is now resolved.

RHOAIENG-15773 - Cannot add a new model registry user

Previously, when managing the permissions of a model registry, you could not add a new user, group, or project as described in Managing model registry permissions. An HTTP request failed error was displayed. This issue is now resolved.

RHOAIENG-14197 - Tooltip text for CPU and Memory graphs is clipped and therefore unreadable

Previously, when you hovered the cursor over the CPU and Memory graphs in the Top resource-consuming distributed workloads section on the Project metrics tab of the Distributed Workloads Metrics page, the tooltip text was clipped, and therefore unreadable. This issue is now resolved.

RHOAIENG-11024 - Resources entries get wiped out after removing opendatahub.io/managed annotation

Previously, manually removing the opendatahub.io/managed annotation from any component deployment YAML file might have caused resource entry values in the file to be erased. This issue is now resolved.

RHOAIENG-8102 - Incorrect requested resources reported when cluster has multiple cluster queues

Previously, when a cluster had multiple cluster queues, the resources requested by all projects were incorrectly reported as zero instead of the true values. This issue is now resolved.

RHOAIENG-16484 - vLLM server engine for Gaudi accelerators fails after a period of inactivity

Previously, when using the vLLM ServingRuntime with Gaudi accelerators support for KServe model-serving runtime on a cluster equipped with Gaudi hardware, the vLLM server could fail with a TimeoutError message after a period of inactivity where it was not processing continuous inference requests. This issue no longer occurs.

RHOAIENG-15033 - Model registry instances do not restart or update after upgrading OpenShift AI

Previously, when you upgraded OpenShift AI, existing instances of the model registry component were not updated, which caused the instance pods to use older images than the ones referenced by the operator pod. This issue is now resolved.

RHOAIENG-15008 - Error when creating a bias metric from the CLI without a request name

Previously, the user interface sometimes displayed an error message when you viewed bias metrics if the requestName parameter was not set. If you used the user interface to view bias metrics, but wanted to configure them through the CLI, you had to specify a requestName parameter within your payload. This issue is now resolved.

RHOAIENG-14986 - Incorrect package path causes copy_demo_nbs to fail

Previously, the copy_demo_nbs() function of the CodeFlare SDK failed because of an incorrect path to the SDK package. Running this function resulted in a FileNotFound error. This issue is now resolved.
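A minimal sketch of calling the function, assuming the CodeFlare SDK is installed in the workbench image and exports copy_demo_nbs at the top level; the destination directory is illustrative.

from codeflare_sdk import copy_demo_nbs

# Copies the demo notebooks bundled with the SDK into the given directory;
# with the fix this no longer fails with a FileNotFound error caused by a wrong package path.
copy_demo_nbs("demo-notebooks")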

RHOAIENG-14552 - Workbench or notebook OAuth proxy fails with FIPS on OpenShift Container Platform 4.16

Previously, when using OpenShift 4.16 or newer in a FIPS-enabled cluster, connecting to a running workbench failed because the connection between the internal component oauth-proxy and the OpenShift ingress failed with a TLS handshake error. When opening a workbench, the browser showed an "Application is not available" screen without any additional diagnostics. This issue is now resolved.

RHOAIENG-14095 - The dashboard is temporarily unavailable after installing the OpenShift AI Operator

Previously, after you installed the OpenShift AI Operator, the OpenShift AI dashboard was unavailable for approximately three minutes. As a result, a Cannot read properties of undefined page sometimes appeared. This issue is now resolved.

RHOAIENG-13633 - Cannot set a serving platform for a project without first deploying a model from outside of the model registry

Previously, you could not set a serving platform for a project without first deploying a model from outside of the model registry. You could not deploy a model from a model registry to a project unless the project already had single-model or multi-model serving selected. The only way to select single-model or multi-model serving from the OpenShift AI UI was to first deploy a model or model server from outside the registry. This issue is now resolved.

RHOAIENG-545 - Cannot specify a generic default node runtime image in JupyterLab pipeline editor

Previously, when you edited an Elyra pipeline in the JupyterLab IDE pipeline editor, and you clicked the PIPELINE PROPERTIES tab, and scrolled to the Generic Node Defaults section and edited the Runtime Image field, your changes were not saved. This issue is now resolved.

RHOAIENG-14571 - Data Science Pipelines API Server unreachable in a managed IBM Cloud OpenShift installation of OpenShift AI

Previously, when configuring a data science pipeline server, communication errors that prevented successful interaction with the pipeline server occurred. This issue is now resolved.

RHOAIENG-14195 - Ray cluster creation fails when deprecated head_memory parameter is used

Previously, if you included the deprecated head_memory parameter in your Ray cluster configuration, the Ray cluster creation failed. This issue is now resolved.
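The sketch below shows a Ray cluster configuration that omits the deprecated head_memory parameter; the head_memory_requests and head_memory_limits names are assumptions that may vary between CodeFlare SDK versions, and all values are illustrative.

from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",
    namespace="demo",
    num_workers=2,
    head_memory_requests=8,   # GiB requested for the head node (instead of the deprecated head_memory)
    head_memory_limits=8,
    worker_memory_requests=4,
    worker_memory_limits=4,
))
cluster.up()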

RHOAIENG-11895 - Unable to clone a GitHub repo in JupyterLab when configuring a custom CA bundle using |-

Previously, if you configured a custom Certificate Authority (CA) bundle in the DSCInitialization (DSCI) object using |-, cloning a repo from JupyterLab failed. This issue is now resolved.

RHOAIENG-1132 (previously documented as RHODS-6383) - An ImagePullBackOff error message is not displayed when required during the workbench creation process

Previously, pods experienced issues pulling container images from the container registry. When an error occurred, the relevant pod entered into an ImagePullBackOff state. During the workbench creation process, if an ImagePullBackOff error occurred, an appropriate message was not displayed. This issue is now resolved.

RHOAIENG-13327 - Importer component (dsl.importer) prevents pipelines from running

Pipelines could not run when using the data science pipelines importer component, dsl.importer. This issue is now resolved.
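For context, a minimal sketch of the dsl.importer component referenced above; the artifact URI is illustrative.

from kfp import dsl

@dsl.pipeline(name="importer-example")
def importer_pipeline():
    # Imports an existing artifact into the pipeline without re-uploading it.
    dsl.importer(
        artifact_uri="s3://my-bucket/datasets/train.csv",
        artifact_class=dsl.Dataset,
        reimport=False,
    )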

RHOAIENG-14652 - kfp-client unable to connect to the pipeline server on OpenShift Container Platform 4.16 and later

In OpenShift 4.16 and later FIPS clusters, data science pipelines were accessible through the OpenShift AI Dashboard. However, connections to the pipelines API server from the KFP SDK failed due to a TLS handshake error. This issue is now resolved.
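A minimal sketch of connecting the KFP SDK to the pipeline server route; the route URL, token, and CA bundle path are illustrative assumptions.

from kfp.client import Client

client = Client(
    host="https://ds-pipeline-dspa-my-project.apps.example.com",
    existing_token="sha256~...",                      # an OpenShift user token
    ssl_ca_cert="/etc/pki/tls/certs/ca-bundle.crt",   # CA bundle used to verify the route's TLS certificate
)
print(client.list_experiments())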

RHOAIENG-10129 - Notebook and Ray cluster with matching names causes secret resolution failure

Previously, if you created a notebook and a Ray cluster that had matching names in the same namespace, one controller failed to resolve its secret because the secret already had an owner. This issue is now resolved.

RHOAIENG-7887 - Kueue fails to monitor RayCluster or PyTorchJob resources

Previously, when you created a DataScienceCluster CR with all components enabled, the Kueue component was installed before the Ray component and the Training Operator component. As a result, the Kueue component did not monitor RayCluster or PyTorchJob resources. When a user created RayCluster or PyTorchJob resources, Kueue did not control the admission of those resources. This issue is now resolved.

RHOAIENG-583 (previously documented as RHODS-8921 and RHODS-6373) - You cannot create a pipeline server or start a workbench when cumulative character limit is exceeded

When the cumulative character limit of a data science project name and a pipeline server name exceeded 62 characters, you were unable to successfully create a pipeline server. Similarly, when the cumulative character limit of a data science project name and a workbench name exceeded 62 characters, workbenches failed to start. This issue is now resolved.

Incorrect logo on dashboard after upgrading

Previously, after upgrading from OpenShift AI 2.11 to OpenShift AI 2.12, the dashboard could incorrectly display the Open Data Hub logo instead of the Red Hat OpenShift AI logo. This issue is now resolved.

RHOAIENG-11297 - Authentication failure after pipeline run

Previously, during the execution of a pipeline run, a connection error could occur due to a certificate authentication failure. This certificate authentication failure could be caused by the use of a multi-line string separator for customCABundle in the default-dsci object, which was not supported by data science pipelines. This issue is now resolved.

RHOAIENG-11232 - Distributed workloads: Kueue alerts do not provide runbook link

After a Kueue alert fires, the cluster administrator can click Observe → Alerting → Alerts and click the name of the alert to open its Alert details page. On the Alert details page, the Runbook section now provides a link to the appropriate runbook to help to diagnose and resolve the issues that triggered the alert. Previously, the runbook link was missing.

RHOAIENG-10665 - Unable to query Speculating with a draft model for granite model

Previously, you could not use speculative decoding on the granite-7b model and granite-7b-accelerator draft model. When querying these models, the queries failed with an internal error. This issue is now resolved.

RHOAIENG-9481 - Pipeline runs menu glitches when clicking action menu

Previously, when you clicked the action menu (⋮) next to a pipeline run on the Experiments > Experiments and runs page, the menu that appeared was not fully visible, and you had to scroll to see all of the menu items. This issue is now resolved.

RHOAIENG-8553 - Workbench created with custom image shows !Deleted flag

Previously, if you disabled the internal image registry on your OpenShift cluster and then created a workbench with a custom image that was imported by using the image tag, for example: quay.io/my-wb-images/my-image:tag, a !Deleted flag was shown in the Notebook image column on the Workbenches tab of the Data science projects page. If you stopped the workbench, you could not restart it. This issue is now resolved.

RHOAIENG-6376 - Pipeline run creation fails after setting pip_index_urls in a pipeline component to a URL that contains a port number and path

Previously, when you created a pipeline and set the pip_index_urls value for a component to a URL that contains a port number and path, compiling the pipeline code and then creating a pipeline run could result in an error. This issue is now resolved.
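The sketch below shows a pipeline component whose pip_index_urls value contains a port number and a path, which is the case described above; the mirror URL and base image are illustrative.

from kfp import dsl

@dsl.component(
    base_image="registry.access.redhat.com/ubi9/python-311",
    packages_to_install=["pandas"],
    pip_index_urls=["https://pypi-mirror.example.com:8443/simple/"],
)
def load_data() -> int:
    import pandas as pd
    return len(pd.DataFrame({"a": [1, 2, 3]}))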

RHOAIENG-4240 - Jobs fail to submit to Ray cluster in unsecured environment

Previously, when running distributed data science workloads from Jupyter notebooks in an unsecured OpenShift cluster, a ConnectionError: Failed to connect to Ray error message might be shown. This issue is now resolved.

RHOAIENG-9670 - vLLM container intermittently crashes while processing requests

Previously, if you deployed a model by using the vLLM ServingRuntime for KServe runtime on the single-model serving platform and also configured tensor-parallel-size, depending on the hardware platform you used, the kserve-container container would intermittently crash while processing requests. This issue is now resolved.

RHOAIENG-8043 - vLLM errors during generation with mixtral-8x7b

Previously, some models, such as Mixtral-8x7b, might have experienced sporadic errors due to a triton issue, such as FileNotFoundError: No such file or directory. This issue is now resolved.

RHOAIENG-2974 - Data science cluster cannot be deleted without its associated initialization object

Previously, you could not delete a DataScienceCluster (DSC) object if its associated DSCInitialization object (DSCI) did not exist. This issue has now been resolved.

RHOAIENG-1205 (previously documented as RHODS-11791) - Usage data collection is enabled after upgrade

Previously, the Allow collection of usage data option would activate whenever you upgraded OpenShift AI. Now, you no longer need to manually deselect the Allow collection of usage data option when you upgrade.

RHOAIENG-1204 (previously documented as ODH-DASHBOARD-1771) - JavaScript error during Pipeline step initializing

Previously, the pipeline Run details page stopped working when a run started. This issue has now been resolved.

RHOAIENG-582 (previously documented as ODH-DASHBOARD-1335) - Rename Edit permission to Contributor

On the Permissions tab for a project, the term Edit has been replaced with Contributor to more accurately describe the actions granted by this permission.

For a complete list of updates, see the Errata advisory.

RHOAIENG-8819 - ibm-granite/granite-3b-code-instruct model fails to deploy on single-model serving platform

Previously, if you tried to deploy the ibm-granite/granite-3b-code-instruct model on the single-model serving platform by using the vLLM ServingRuntime for KServe runtime, the model deployment would fail with an error. This issue is now resolved.

RHOAIENG-7209 - Error displays when setting the default pipeline root

Previously, if you tried to set the default pipeline root using the data science pipelines SDK or the OpenShift AI user interface, an error would appear. This issue is now resolved.

RHOAIENG-6711 - ODH-model-controller overwrites the spec.memberSelectors field in ServiceMeshMemberRoll objects

Previously, if you tried to add a project or namespace to a ServiceMeshMemberRoll resource by using its spec.memberSelectors field, the ODH-model-controller overwrote the field. This issue is now resolved.

RHOAIENG-6649 - An error is displayed when viewing a model on a model server that has no external route defined

Previously, if you tried to use the dashboard to deploy a model on a model server that did not have external routes enabled, a t.components is undefined error message would appear while the model creation was in progress. This issue is now resolved.

RHOAIENG-3981 - In unsecured environment, the functionality to wait for Ray cluster to be ready gets stuck

Previously, when running distributed data science workloads from Jupyter notebooks in an unsecured OpenShift cluster, the functionality to wait for the Ray cluster to be ready before proceeding (cluster.wait_ready()) got stuck even when the Ray cluster was ready. This issue is now resolved.

RHOAIENG-2312 - Importing numpy fails in code-server workbench

Previously, if you tried to import numpy, your code-server workbench would fail. This issue is now resolved.

RHOAIENG-1197 - Cannot create pipeline due to the End date picker in the pipeline run creation page defaulting to NaN values when using Firefox on Linux

Previously, if you tried to create a pipeline with a scheduled recurring run using Firefox on Linux, enabling the End Date parameter would result in Not a Number (NaN) values for both the date and time. This issue is now resolved.

RHOAIENG-1196 (previously documented as ODH-DASHBOARD-2140) - Package versions displayed in dashboard do not match installed versions

Previously, the dashboard would display inaccurate version numbers for packages such as JupyterLab and Notebook. This issue is now resolved.

RHOAIENG-880 - Default pipelines service account is unable to create Ray clusters

Previously, you could not create Ray clusters using the default pipelines Service Account. This issue is now resolved.

RHOAIENG-52 - Token authentication fails in clusters with self-signed certificates

Previously, if you used self-signed certificates, and you used the Python codeflare-sdk in a notebook or in a Python script as part of a pipeline, token authentication would fail. This issue is now resolved.
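A minimal sketch of token authentication with the CodeFlare SDK on a cluster that uses self-signed certificates; the server URL, token, and CA bundle path are illustrative assumptions.

from codeflare_sdk import TokenAuthentication

auth = TokenAuthentication(
    token="sha256~...",                               # an OpenShift user token
    server="https://api.example-cluster.example.com:6443",
    skip_tls=False,
    ca_cert_path="/etc/pki/tls/certs/ca-bundle.crt",  # CA bundle containing the self-signed certificate
)
auth.login()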

RHOAIENG-7312 - Model serving fails during query with token authentication in KServe

Previously, if you enabled both the ModelMesh and KServe components in your DataScienceCluster object and added Authorino as an authorization provider, a race condition could occur that resulted in the odh-model-controller pods being rolled out in a state that is appropriate for ModelMesh, but not for KServe and Authorino. In this situation, if you made an inference request to a running model that was deployed using KServe, you saw a 404 - Not Found error. In addition, the logs for the odh-model-controller deployment object showed a Reconciler error message. This issue is now resolved.

RHOAIENG-7181 (previously documented as RHOAIENG-6343) - Some components are set to Removed after installing OpenShift AI

Previously, after you installed OpenShift AI, the managementState field for the codeflare, kueue, and ray components was incorrectly set to Removed instead of Managed in the DataScienceCluster custom resource. This issue is now resolved.

RHOAIENG-7079 (previously documented as RHOAIENG-6317) - Pipeline task status and logs sometimes not shown in OpenShift AI dashboard

Previously, when running pipelines by using Elyra, the OpenShift AI dashboard might not show the pipeline task status and logs, even when the related pods had not been pruned and the information was still available in the OpenShift Console. This issue is now resolved.

RHOAIENG-7070 (previously documented as RHOAIENG-6709) - Jupyter notebook creation might fail when different environment variables specified

Previously, if you started and then stopped a Jupyter notebook, and edited its environment variables in an OpenShift AI workbench, the notebook failed to restart. This issue is now resolved.

RHOAIENG-6853 - Cannot set pod toleration in Elyra pipeline pods

Previously, if you set a pod toleration for an Elyra pipeline pod, the toleration did not take effect. This issue is now resolved.

RHOAIENG-5314 - Data science pipeline server fails to deploy in fresh cluster due to network policies

Previously, if you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved.

RHOAIENG-4252 - Data science pipeline server deletion process fails to remove ScheduledWorkFlow resource

Previously, the pipeline server deletion process did not remove the ScheduledWorkFlow resource. As a result, new DataSciencePipelinesApplications (DSPAs) did not recognize the redundant ScheduledWorkFlow resource. This issue is now resolved.

RHOAIENG-3411 (previously documented as RHOAIENG-3378) - Internal Image Registry is an undeclared hard dependency for Jupyter notebooks spawn process

Previously, before you could start OpenShift AI notebooks and workbenches, you had to enable the internal, integrated container image registry in OpenShift. Attempts to start notebooks or workbenches without first enabling the image registry failed with an "InvalidImageName" error. You can now create and use workbenches in OpenShift AI without enabling the internal OpenShift image registry. If you update a cluster to enable or disable the internal image registry, you must recreate existing workbenches for the registry changes to take effect.

RHOAIENG-2541 - KServe controller pod experiences OOM because of too many secrets in the cluster

Previously, if your OpenShift cluster had a large number of secrets, the KServe controller pod could continually crash due to an out-of-memory (OOM) error. This issue is now resolved.

RHOAIENG-1452 - The Red Hat OpenShift AI Add-on gets stuck

Previously, uninstalling the Red Hat OpenShift AI Add-on did not delete OpenShift AI components when the uninstall was triggered through OCM APIs. This issue is now resolved.

RHOAIENG-307 - Removing the DataScienceCluster deletes all OpenShift Serverless CRs

Previously, if you deleted the DataScienceCluster custom resource (CR), all OpenShift Serverless CRs (including knative-serving, deployments, gateways, and pods) were also deleted. This issue is now resolved.

RHOAIENG-6709 - Jupyter notebook creation might fail when different environment variables specified

Previously, if you started and then stopped a Jupyter notebook, and edited its environment variables in an OpenShift AI workbench, the notebook failed to restart. This issue is now resolved.

RHOAIENG-6701 - Users without cluster administrator privileges cannot access the job submission endpoint of the Ray dashboard

Previously, users of the distributed workloads feature who did not have cluster administrator privileges for OpenShift might not have been able to access or use the job submission endpoint of the Ray dashboard. This issue is now resolved.

RHOAIENG-6578 - Request without token to a protected inference point not working by default

Previously, if you added Authorino as an authorization provider for the single-model serving platform and enabled token authorization for models that you deployed, it was still possible to query the models without specifying the tokens. This issue is now resolved.

RHOAIENG-6343 - Some components are set to Removed after installing OpenShift AI

Previously, after you installed OpenShift AI, the managementState field for the codeflare, kueue, and ray components was incorrectly set to Removed instead of Managed in the DataScienceCluster custom resource. This issue is now resolved.

RHOAIENG-5067 - Model server metrics page does not load for a model server based on the ModelMesh component

Previously, data science project names that contained capital letters or spaces could cause issues on the model server metrics page for model servers based on the ModelMesh component. The metrics page might not have received data correctly, resulting in a 400 Bad Request error and preventing the page from loading. This issue is now resolved.

RHOAIENG-4966 - Self-signed certificates in a custom CA bundle might be missing from the odh-trusted-ca-bundle configuration map

Previously, if you added a custom certificate authority (CA) bundle to use self-signed certificates, sometimes the custom certificates were missing from the odh-trusted-ca-bundle ConfigMap, or the non-reserved namespaces did not contain the odh-trusted-ca-bundle ConfigMap when the ConfigMap was set to managed. This issue is now resolved.

RHOAIENG-4938 (previously documented as RHOAIENG-4327) - Workbenches do not use the self-signed certificates from centrally configured bundle automatically

There are two bundle options to include self-signed certificates in OpenShift AI, ca-bundle.crt and odh-ca-bundle.crt. Previously, workbenches did not automatically use the self-signed certificates from the centrally configured bundle and you had to define environment variables that pointed to your certificate path. This issue is now resolved.

RHOAIENG-4572 - Unable to run data science pipelines after install and upgrade in certain circumstances

Previously, you were unable to run data science pipelines after installing or upgrading OpenShift AI in the following circumstances:

  • You installed OpenShift AI and you had a valid CA certificate. Within the default-dsci object, you changed the managementState field for trustedCABundle to Removed after installation.
  • You upgraded OpenShift AI from version 2.6 to version 2.8 and you had a valid CA certificate.
  • You upgraded OpenShift AI from version 2.7 to version 2.8 and you had a valid CA certificate.

This issue is now resolved.

RHOAIENG-4524 - BuildConfig definitions for RStudio images contain occurrences of incorrect branch

Previously, the BuildConfig definitions for the RStudio and CUDA - RStudio workbench images pointed to the wrong branch in OpenShift AI. This issue is now resolved.

RHOAIENG-3963 - Unnecessary managed resource warning

Previously, when you edited and saved the OdhDashboardConfig custom resource for the redhat-ods-applications project, the system incorrectly displayed a Managed resource warning message. This issue is now resolved.

RHOAIENG-2542 - Inference service pod does not always get an Istio sidecar

Previously, when you deployed a model using the single-model serving platform (which uses KServe), the istio-proxy container could be missing in the resulting pod, even if the inference service had the sidecar.istio.io/inject=true annotation. This issue is now resolved.

RHOAIENG-1666 - The Import Pipeline button is prematurely accessible

Previously, when you imported a pipeline to a workbench that belonged to a data science project, the Import Pipeline button was accessible before the pipeline server was fully available. This issue is now resolved.

RHOAIENG-673 (previously documented as RHODS-12946) - Cannot install from PyPI mirror in disconnected environment or when using private certificates

In disconnected environments, Red Hat OpenShift AI cannot connect to the public-facing PyPI repositories, so you must specify a repository inside your network. Previously, if you were using private TLS certificates and a data science pipeline was configured to install Python packages, the pipeline run would fail. This issue is now resolved.

RHOAIENG-3355 - OVMS on KServe does not use accelerators correctly

Previously, when you deployed a model using the single-model serving platform and selected the OpenVINO Model Server serving runtime, if you requested an accelerator to be attached to your model server, the accelerator hardware was detected but was not used by the model when responding to queries. This issue is now resolved.

RHOAIENG-2869 - Cannot edit existing model framework and model path in a multi-model project

Previously, when you tried to edit a model in a multi-model project using the Deploy model dialog, the Model framework and Path values did not update. This issue is now resolved.

RHOAIENG-2724 - Model deployment fails because fields automatically reset in dialog

Previously, when you deployed a model or edited a deployed model, the Model servers and Model framework fields in the "Deploy model" dialog might have reset to the default state. The Deploy button might have remained enabled even though these mandatory fields no longer contained valid values. This issue is now resolved.

RHOAIENG-2099 - Data science pipeline server fails to deploy in fresh cluster

Previously, when you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved.

RHOAIENG-1199 (previously documented as ODH-DASHBOARD-1928) - Custom serving runtime creation error message is unhelpful

Previously, when you tried to create or edit a custom model-serving runtime and an error occurred, the error message did not indicate the cause of the error. The error messages have been improved.

RHOAIENG-556 - ServingRuntime for KServe model is created regardless of error

Previously, when you tried to deploy a KServe model and an error occurred, the InferenceService custom resource (CR) was still created and the model was shown in the Data science projects page, but the status would always remain unknown. The KServe deploy process has been updated so that the ServingRuntime is not created if an error occurs.

RHOAIENG-548 (previously documented as ODH-DASHBOARD-1776) - Error messages when user does not have project administrator permission

Previously, if you did not have administrator permission for a project, you could not access some features, and the error messages did not explain why. For example, when you created a model server in an environment where you only had access to a single namespace, an Error creating model server error message appeared, even though the model server was still successfully created. This issue is now resolved.

RHOAIENG-66 - Ray dashboard route deployed by CodeFlare SDK exposes self-signed certs instead of cluster cert

Previously, when you deployed a Ray cluster by using the CodeFlare SDK with the openshift_oauth=True option, the resulting route for the Ray cluster was secured by using the passthrough method and as a result, the self-signed certificate used by the OAuth proxy was exposed. This issue is now resolved.

RHOAIENG-12 - Cannot access Ray dashboard from some browsers

In some browsers, users of the distributed workloads feature might not have been able to access the Ray dashboard because the browser automatically changed the prefix of the dashboard URL from http to https. This issue is now resolved.

RHODS-6216 - The ModelMesh oauth-proxy container is intermittently unstable

Previously, ModelMesh pods did not deploy correctly due to a failure of the ModelMesh oauth-proxy container. This issue occurred intermittently and only if authentication was enabled in the ModelMesh runtime environment. This issue is now resolved.

RHOAIENG-535 - Metrics graph showing HTTP requests for deployed models is incorrect if there are no HTTP requests

Previously, if a deployed model did not receive at least one HTTP request for each of the two data types (success and failed), the graphs that show HTTP request performance metrics (for all models on the model server or for the specific model) rendered incorrectly, with a straight line that indicated a steadily increasing number of failed requests. This issue is now resolved.

RHOAIENG-1467 - Serverless net-istio controller pod might hit OOM

Previously, the Knative net-istio-controller pod (which is a dependency for KServe) might continuously crash due to an out-of-memory (OOM) error. This issue is now resolved.

RHOAIENG-1899 (previously documented as RHODS-6539) - The Anaconda Professional Edition cannot be validated and enabled

Previously, you could not enable the Anaconda Professional Edition because the dashboard’s key validation for it was inoperable. This issue is now resolved.

RHOAIENG-2269 - (Single-model) Dashboard fails to display the correct number of model replicas

Previously, on a single-model serving platform, the Models and model servers section of a data science project did not show the correct number of model replicas. This issue is now resolved.

RHOAIENG-2270 - (Single-model) Users cannot update model deployment settings

Previously, you couldn’t edit the deployment settings (for example, the number of replicas) of a model you deployed with a single-model serving platform. This issue is now resolved.

RHODS-8865 - A pipeline server fails to start unless you specify an Amazon Web Services (AWS) Simple Storage Service (S3) bucket resource

Previously, when you created a data connection for a data science project, the AWS_S3_BUCKET field was not designated as a mandatory field. However, if you attempted to configure a pipeline server with a data connection where the AWS_S3_BUCKET field was not populated, the pipeline server failed to start successfully. This issue is now resolved. The Configure pipeline server dialog has been updated to include the Bucket field as a mandatory field.

RHODS-12899 - OpenVINO runtime missing annotation for NVIDIA GPUs

Previously, if a user selected the OpenVINO model server (supports GPUs) runtime and selected an NVIDIA GPU accelerator in the model server user interface, the system could display an unnecessary warning that the selected accelerator was not compatible with the selected runtime. The warning is no longer displayed.

RHOAIENG-84 - Cannot use self-signed certificates with KServe

Previously, the single-model serving platform did not support self-signed certificates. This issue is now resolved. To use self-signed certificates with KServe, follow the steps described in Working with certificates.

RHOAIENG-164 - Number of model server replicas for KServe is not applied correctly from the dashboard

Previously, when you set a number of model server replicas different from the default (1), the model (server) was still deployed with 1 replica. This issue is now resolved.

RHOAIENG-288 - Recommended image version label for workbench is shown for two versions

Most of the workbench images that are available in OpenShift AI are provided in multiple versions. The only recommended version is the latest version. In Red Hat OpenShift AI 2.4 and 2.5, the Recommended tag was erroneously shown for multiple versions of an image. This issue is now resolved.

RHOAIENG-293 - Deprecated ModelMesh monitoring stack not deleted after upgrading from 2.4 to 2.5

In Red Hat OpenShift AI 2.5, the former ModelMesh monitoring stack was no longer deployed because it was replaced by user workload monitoring. However, the former monitoring stack was not deleted during an upgrade to OpenShift AI 2.5. Some components remained and used cluster resources. This issue is now resolved.

RHOAIENG-343 - Manual configuration of OpenShift Service Mesh and OpenShift Serverless does not work for KServe

If you installed OpenShift Serverless and OpenShift Service Mesh and then installed Red Hat OpenShift AI with KServe enabled, KServe was not deployed. This issue is now resolved.

RHOAIENG-517 - User with edit permissions cannot see created models

A user with edit permissions could not see any created models, unless they were the project owner or had admin permissions for the project. This issue is now resolved.

RHOAIENG-804 - Cannot deploy Large Language Models with KServe on FIPS-enabled clusters

Previously, Red Hat OpenShift AI did not fully support FIPS, so you could not deploy Large Language Models (LLMs) with KServe on FIPS-enabled clusters. This issue is now resolved.

RHOAIENG-908 - Cannot use ModelMesh if KServe was previously enabled and then removed

Previously, when both ModelMesh and KServe were enabled in the DataScienceCluster object, and you subsequently removed KServe, you could no longer deploy new models with ModelMesh. You could continue to use models that were previously deployed with ModelMesh. This issue is now resolved.

RHOAIENG-2184 - Cannot create Ray clusters or distributed workloads

Previously, users could not create Ray clusters or distributed workloads in namespaces where they had admin or edit permissions. This issue is now resolved.

ODH-DASHBOARD-1991 - ovms-gpu-ootb is missing recommended accelerator annotation

Previously, when you added a model server to your project, the Serving runtime list did not show the Recommended serving runtime label for the NVIDIA GPU. This issue is now resolved.

RHOAIENG-807 - Accelerator profile toleration removed when restarting a workbench

Previously, if you created a workbench that used an accelerator profile that in turn included a toleration, restarting the workbench removed the toleration information, which meant that the restart could not complete. A freshly created GPU-enabled workbench might start the first time, but never successfully restarted afterwards because the generated pod remained forever pending. This issue is now resolved.

DATA-SCIENCE-PIPELINES-OPERATOR-294 - Scheduled pipeline run that uses data-passing might fail to pass data between steps, or fail the step entirely

A scheduled pipeline run that uses an S3 object store to store the pipeline artifacts might fail with an error such as the following:

Bad value for --endpoint-url "cp": scheme is missing. Must be of the form http://<hostname>/ or https://<hostname>/

This issue occurred because the S3 object store endpoint was not successfully passed to the pods for the scheduled pipeline run. This issue is now resolved.

RHODS-4769 - GPUs on nodes with unsupported taints cannot be allocated to notebook servers

GPUs on nodes marked with any taint other than the supported nvidia.com/gpu taint could not be selected when creating a notebook server. This issue is now resolved.

RHODS-6346 - Unclear error message displays when using invalid characters to create a data science project

When creating a data science project’s data connection, workbench, or storage connection using invalid special characters, the following error message was displayed:

the object provided is unrecognized (must be of type Secret): couldn't get version/kind; json parse error: unexpected end of JSON input ({"apiVersion":"v1","kind":"Sec ...)

The error message failed to clearly indicate the problem. The error message now indicates that invalid characters were entered.

RHODS-6950 - Unable to scale down workbench GPUs when all GPUs in the cluster are being used

In earlier releases, it was not possible to scale down workbench GPUs if all GPUs in the cluster were being used. This issue applied to GPUs being used by one workbench, and GPUs being used by multiple workbenches. You can now scale down the GPUs by selecting None from the Accelerators list.

RHODS-8939 - Default shared memory for a Jupyter notebook created in a previous release causes a runtime error

Starting with release 1.31, this issue is resolved, and the shared memory for any new notebook is set to the size of the node.

For a Jupyter notebook created in a release earlier than 1.31, the default shared memory for a Jupyter notebook is set to 64 MB and you cannot change this default value in the notebook configuration.

To fix this issue, you must recreate the notebook or follow the process described in the Knowledgebase article How to change the shared memory for a Jupyter notebook in Red Hat OpenShift AI.

RHODS-9030 - Uninstall process for OpenShift AI might become stuck when removing kfdefs resources

The steps for uninstalling the OpenShift AI managed service are described in Uninstalling OpenShift AI.

However, even when you followed this guide, you might have seen that the uninstall process did not finish successfully. Instead, the process stayed on the step of deleting kfdefs resources that were used by the Kubeflow Operator. As shown in the following example, kfdefs resources might exist in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks namespaces:

$ oc get kfdefs.kfdef.apps.kubeflow.org -A

NAMESPACE                  NAME                                   AGE
redhat-ods-applications    rhods-anaconda                         3h6m
redhat-ods-applications    rhods-dashboard                        3h6m
redhat-ods-applications    rhods-data-science-pipelines-operator  3h6m
redhat-ods-applications    rhods-model-mesh                       3h6m
redhat-ods-applications    rhods-nbc                              3h6m
redhat-ods-applications    rhods-osd-config                       3h6m
redhat-ods-monitoring      modelmesh-monitoring                   3h6m
redhat-ods-monitoring      monitoring                             3h6m
rhods-notebooks            rhods-notebooks                        3h6m
rhods-notebooks            rhods-osd-config                       3h5m

Failed removal of the kfdefs resources might have also prevented later installation of a newer version of OpenShift AI. This issue no longer occurs.

RHODS-9764 - Data connection details get reset when editing a workbench

When you edited a workbench that had an existing data connection and then selected the Create new data connection option, the edit page might revert to the Use existing data connection option before you had finished specifying the new connection details. This issue is now resolved.

RHODS-9583 - Data Science dashboard did not detect an existing OpenShift Pipelines installation

When the OpenShift Pipelines Operator was installed as a global operator on your cluster, the OpenShift AI dashboard did not detect it. The OpenShift Pipelines Operator is now detected successfully.

ODH-DASHBOARD-1639 - Wrong TLS value in dashboard route

Previously, when a route was created for the OpenShift AI dashboard on OpenShift, the tls.termination field had an invalid default value of Reencrypt. This issue is now resolved. The new value is reencrypt.

ODH-DASHBOARD-1638 - Name placeholder in Triggered Runs tab shows Scheduled run name

Previously, when you clicked Pipelines > Runs and then selected the Triggered tab to configure a triggered run, the example value shown in the Name field was Scheduled run name. This issue is now resolved.

ODH-DASHBOARD-1547 - "We can’t find that page" message displayed in dashboard when pipeline operator installed in background

Previously, when you used the Data Science Pipelines page of the dashboard to install the OpenShift Pipelines Operator, when the Operator installation was complete, the page refreshed to show a We can't find that page message. This issue is now resolved. When the Operator installation is complete, the dashboard redirects you to the Pipelines page, where you can create a pipeline server.

ODH-DASHBOARD-1545 - Dashboard keeps scrolling to bottom of project when Models tab is expanded

Previously, on the Data science projects page of the dashboard, if you clicked the Deployed models tab to expand it and then tried to perform other actions on the page, the page automatically scrolled back to the Deployed models section. This affected your ability to perform other actions. This issue is now resolved.

NOTEBOOKS-156 - Elyra included an example runtime called Test

Previously, Elyra included an example runtime configuration called Test. If you selected this configuration when running a data science pipeline, you could see errors. The Test configuration has now been removed.

RHODS-9622 - Duplicating a scheduled pipeline run does not copy the existing period and pipeline input parameter values

Previously, when you duplicated a scheduled pipeline run that had a periodic trigger, the duplication process did not copy the configured execution frequency for the recurring run or the specified pipeline input parameters. This issue is now resolved.

RHODS-8932 - Incorrect cron format was displayed by default when scheduling a recurring pipeline run

When you scheduled a recurring pipeline run by configuring a cron job, the OpenShift AI interface displayed an incorrect format by default. It now displays the correct format.

RHODS-9374 - Pipelines with non-unique names did not appear in the data science project user interface

If you launched a notebook from a Jupyter application that supported Elyra, or if you used a workbench, when you submitted a pipeline to be run, pipelines with non-unique names did not appear in the Pipelines section of the relevant data science project page or the Pipelines heading of the data science pipelines page. This issue has now been resolved.

RHODS-9329 - Deploying a custom model-serving runtime could result in an error message

Previously, if you used the OpenShift AI dashboard to deploy a custom model-serving runtime, the deployment process could fail with an Error retrieving Serving Runtime message. This issue is now resolved.

RHODS-9064 - After upgrade, the Data Science Pipelines tab was not enabled on the OpenShift AI dashboard

When you upgraded from OpenShift AI 1.26 to OpenShift AI 1.28, the Data Science Pipelines tab was not enabled in the OpenShift AI dashboard. This issue is resolved in OpenShift AI 1.29.

RHODS-9443 - Exporting an Elyra pipeline exposed S3 storage credentials in plain text

In OpenShift AI 1.28.0, when you exported an Elyra pipeline from JupyterLab in Python DSL format or YAML format, the generated output contained S3 storage credentials in plain text. This issue has been resolved in OpenShift AI 1.28.1. However, after you upgrade to OpenShift AI 1.28.1, if your deployment contains a data science project with a pipeline server and a data connection, you must perform the following additional actions for the fix to take effect:

  1. Refresh your browser page.
  2. Stop any running workbenches in your deployment and restart them.

Furthermore, to confirm that your Elyra runtime configuration contains the fix, perform the following actions:

  1. In the left sidebar of JupyterLab, click Runtimes.
  2. Hover the cursor over the runtime configuration that you want to view and click the Edit button.

    The Data Science Pipelines runtime configuration page opens.

  3. Confirm that KUBERNETES_SECRET is defined as the value in the Cloud Object Storage Authentication Type field.
  4. Close the runtime configuration without changing it.

RHODS-8460 - When editing the details of a shared project, the user interface remained in a loading state without reporting an error

When a user with permission to edit a project attempted to edit its details, the user interface remained in a loading state and did not display an appropriate error message. Users with permission to edit projects cannot edit any fields in the project, such as its description. Those users can edit only components belonging to a project, such as its workbenches, data connections, and storage.

The user interface now displays an appropriate error message and does not try to update the project description.

RHODS-8482 - Data science pipeline graphs did not display node edges for running pipelines

If you ran pipelines that did not contain Tekton-formatted Parameters or when expressions in their YAML code, the OpenShift AI user interface did not display connecting edges to and from graph nodes. For example, if you used a pipeline containing the runAfter property or Workspaces, the user interface displayed the graph for the executed pipeline without edge connections. The OpenShift AI user interface now displays connecting edges to and from graph nodes.

RHODS-8923 - Newly created data connections were not detected when you attempted to create a pipeline server

If you created a data connection from within a Data Science project, and then attempted to create a pipeline server, the Configure a pipeline server dialog did not detect the data connection that you created. This issue is now resolved.

RHODS-8461 - When sharing a project with another user, the OpenShift AI user interface text was misleading

When you attempted to share a Data Science project with another user, the user interface text misleadingly implied that users could edit all of its details, such as its description. However, users can edit only components belonging to a project, such as its workbenches, data connections, and storage. This issue is now resolved and the user interface text no longer misleadingly implies that users can edit all of its details.

RHODS-8462 - Users with "Edit" permission could not create a Model Server

Users with "Edit" permissions can now create a Model Server without token authorization. Users must have "Admin" permissions to create a Model Server with token authorization.

RHODS-8796 - OpenVINO Model Server runtime did not have the required flag to force GPU usage

OpenShift AI includes the OpenVINO Model Server (OVMS) model-serving runtime by default. When you configured a new model server and chose this runtime, the Configure model server dialog enabled you to specify a number of GPUs to use with the model server. However, when you finished configuring the model server and deployed models from it, the model server did not actually use any GPUs. This issue is now resolved and the model server uses the GPUs.

RHODS-8861 - Changing the host project when creating a pipeline run resulted in an inaccurate list of available pipelines

If you changed the host project while creating a pipeline run, the interface failed to make the pipelines of the new host project available. Instead, the interface showed pipelines that belong to the project you initially selected on the Data Science Pipelines > Runs page. This issue is now resolved. You no longer select a pipeline from the Create run page. The pipeline selection is automatically updated when you click the Create run button, based on the current project and its pipeline.

RHODS-8249 - Environment variables uploaded as ConfigMap were stored in Secret instead

Previously, in the OpenShift AI interface, when you added environment variables to a workbench by uploading a ConfigMap configuration, the variables were stored in a Secret object instead. This issue is now resolved.

RHODS-7975 - Workbenches could have multiple data connections

Previously, if you changed the data connection for a workbench, the existing data connection was not released. As a result, a workbench could stay connected to multiple data sources. This issue is now resolved.

RHODS-7948 - Uploading a secret file containing environment variables resulted in double-encoded values

Previously, when creating a workbench in a data science project, if you uploaded a YAML-based secret file containing environment variables, the environment variable values were not decoded. Then, in the resulting OpenShift secret created by this process, the encoded values were encoded again. This issue is now resolved.

RHODS-6429 - An error was displayed when creating a workbench with the Intel OpenVINO or Anaconda Professional Edition images

Previously, when you created a workbench with the Intel OpenVINO or Anaconda Professional Edition images, an error appeared during the creation process. However, the workbench was still successfully created. This issue is now resolved.

RHODS-6372 - Idle notebook culler did not take active terminals into account

Previously, if a notebook image had a running terminal, but no active, running kernels, the idle notebook culler detected the notebook as inactive and stopped the terminal. This issue is now resolved.

RHODS-5700 - Data connections could not be created or connected to when creating a workbench

When creating a workbench, users were unable to create a new data connection, or connect to existing data connections.

RHODS-6281 - OpenShift AI administrators could not access Settings page if an admin group was deleted from cluster

Previously, if a Red Hat OpenShift AI administrator group was deleted from the cluster, OpenShift AI administrator users could no longer access the Settings page on the OpenShift AI dashboard. In particular, the following behavior was seen:

  • When an OpenShift AI administrator user tried to access the Settings → User management page, a "Page Not Found" error appeared.
  • Cluster administrators did not lose access to the Settings page on the OpenShift AI dashboard. When a cluster administrator accessed the Settings → User management page, a warning message appeared, indicating that the deleted OpenShift AI administrator group no longer existed in OpenShift. The deleted administrator group was then removed from OdhDashboardConfig, and administrator access was restored.

This issue is now resolved.

RHODS-1968 - Deleted users stayed logged in until dashboard was refreshed

Previously, when a user’s permissions for the Red Hat OpenShift AI dashboard were revoked, the user would notice the change only after a refresh of the dashboard page.

This issue is now resolved. When a user’s permissions are revoked, the OpenShift AI dashboard locks the user out within 30 seconds, without the need for a refresh.

RHODS-6384 - A workbench data connection was incorrectly updated when creating a duplicated data connection

When creating a data connection that contained the same name as an existing data connection, the data connection creation failed, but the associated workbench still restarted and connected to the wrong data connection. This issue has been resolved. Workbenches now connect to the correct data connection.

RHODS-6370 - Workbenches failed to receive the latest toleration

Previously, to acquire the latest toleration, users had to attempt to edit the relevant workbench, make no changes, and save the workbench again. Users can now apply the latest toleration change by stopping and then restarting their data science project’s workbench.

RHODS-6779 - Models failed to be served after upgrading from OpenShift AI 1.20 to OpenShift AI 1.21

When upgrading from OpenShift AI 1.20 to OpenShift AI 1.21, the modelmesh-serving pod attempted to pull a non-existent image, causing an image pull error. As a result, models were unable to be served using the model serving feature in OpenShift AI. The odh-openvino-servingruntime-container-v1.21.0-15 image now deploys successfully.

RHODS-5945 - Anaconda Professional Edition could not be enabled in OpenShift AI

Anaconda Professional Edition could not be enabled for use in OpenShift AI. Instead, an InvalidImageName error was displayed in the associated pod’s Events page. Anaconda Professional Edition can now be successfully enabled.

RHODS-5822 - Admin users were not warned when usage exceeded 90% and 100% for PVCs created by data science projects

Warnings indicating when a PVC exceeded 90% and 100% of its capacity failed to display to admin users for PVCs created by data science projects. Admin users can now view warnings about when a PVC exceeds 90% and 100% of its capacity from the dashboard.

RHODS-5889 - Error message was not displayed if a data science notebook was stuck in "pending" status

If a notebook pod could not be created, the OpenShift AI interface did not show an error message. An error message is now displayed if a data science notebook cannot be spawned.

RHODS-5886 - Returning to the Hub Control Panel dashboard from the data science workbench failed

If you attempted to return to the dashboard from your workbench Jupyter notebook by clicking File → Log Out, you were redirected to the dashboard and remained on a "Logging out" page. Likewise, if you attempted to return to the dashboard by clicking File → Hub Control Panel, you were incorrectly redirected to the Start a notebook server page. Returning to the Hub Control Panel dashboard from the data science workbench now works as expected.

RHODS-6101 - Administrators were unable to stop all notebook servers

OpenShift AI administrators could not stop all notebook servers simultaneously. Administrators can now stop all notebook servers using the Stop all servers button and stop a single notebook by selecting Stop server from the action menu beside the relevant user.

RHODS-5891 - Workbench event log was not clearly visible

When creating a workbench, users could not easily locate the event log window in the OpenShift AI interface. The Starting label under the Status column is now underlined when you hover over it, indicating you can click on it to view the notebook status and the event log.

RHODS-6296 - ISV icons did not render when using a browser other than Google Chrome

When using a browser other than Google Chrome, not all ISV icons under Explore and Resources pages were rendered. ISV icons now display properly on all supported browsers.

RHODS-3182 - Incorrect number of available GPUs was displayed in Jupyter

When a user attempted to create a notebook instance in Jupyter, the maximum number of GPUs available for scheduling was not updated as GPUs were assigned. Jupyter now displays the correct number of GPUs available.

RHODS-5890 - When multiple persistent volumes were mounted to the same directory, workbenches failed to start

When mounting more than one persistent volume (PV) to the same mount folder in the same workbench, creation of the notebook pod failed and no errors were displayed to indicate there was an issue.

RHODS-5768 - Data science projects were not visible to users in Red Hat OpenShift AI

Removing the [DSP] suffix at the end of a project’s Display Name property caused the associated data science project to no longer be visible. It is no longer possible for users to remove this suffix.

RHODS-5701 - Data connection configuration details were overwritten

When a data connection was added to a workbench, the configuration details for that data connection were saved in environment variables. When a second data connection was added, the configuration details were saved using the same environment variables, which meant the configuration for the first data connection was overwritten. At the moment, users can add a maximum of one data connection to each workbench.

RHODS-5252 - The notebook Administration page did not provide administrator access to a user’s notebook server

The notebook Administration page, accessed from the OpenShift AI dashboard, did not provide the means for an administrator to access a user’s notebook server. Administrators were restricted to only starting or stopping a user’s notebook server.

RHODS-2438 - PyTorch and TensorFlow images were unavailable when upgrading

When upgrading from OpenShift AI 1.3 to a later version, PyTorch and TensorFlow images were unavailable to users for approximately 30 minutes. As a result, users were unable to start PyTorch and TensorFlow notebooks in Jupyter during the upgrade process. This issue has now been resolved.

RHODS-5354 - Environment variable names were not validated when starting a notebook server

Environment variable names were not validated on the Start a notebook server page. If an invalid environment variable was added, users were unable to successfully start a notebook. The environment variable name is now checked in real time. If an invalid environment variable name is entered, an error message displays indicating that valid environment variable names must consist of alphabetic characters, digits, _, -, or ., and must not start with a digit.

RHODS-4617 - The Number of GPUs drop-down was only visible if there were GPUs available

Previously, the Number of GPUs drop-down was only visible on the Start a notebook server page if GPU nodes were available. The Number of GPUs drop-down now also correctly displays if an autoscaling machine pool is defined in the cluster, even if no GPU nodes are currently available, possibly resulting in the provisioning of a new GPU node on the cluster.

RHODS-5420 - Cluster admin did not get administrator access if it was the only user present in the cluster

Previously, when the cluster admin was the only user present in the cluster, it did not get Red Hat OpenShift administrator access automatically. Administrator access is now correctly applied to the cluster admin user.

RHODS-4321 - Incorrect package version displayed during notebook selection

The Start a notebook server page displayed an incorrect version number (11.4 instead of 11.7) for the CUDA notebook image. The version of CUDA installed is no longer specified on this page.

RHODS-5001 - Admin users could add invalid tolerations to notebook pods

An admin user could add invalid tolerations on the Cluster settings page without triggering an error. If an invalid toleration was added, users were unable to successfully start notebooks. The toleration key is now checked in real time. If an invalid toleration name is entered, an error message displays indicating that valid toleration names consist of alphanumeric characters, -, _, or ., and must start and end with an alphanumeric character.

RHODS-5100 - Group role bindings were not applied to cluster administrators

Previously, if you had assigned cluster admin privileges to a group rather than a specific user, the dashboard failed to recognize administrative privileges for users in the administrative group. Group role bindings are now correctly applied to cluster administrators as expected.

RHODS-4947 - Old Minimal Python notebook image persisted after upgrade

After upgrading from OpenShift AI 1.14 to 1.15, the older version of the Minimal Python notebook persisted, including all associated package versions. The older version of the Minimal Python notebook no longer persists after upgrade.

RHODS-4935 - Excessive "missing x-forwarded-access-token header" error messages displayed in dashboard log

The rhods-dashboard pod’s log contained an excessive number of "missing x-forwarded-access-token header" error messages due to a readiness probe hitting the /status endpoint. This issue has now been resolved.

RHODS-2653 - Error occurred while fetching the generated images in the sample Pachyderm notebook

An error occurred when a user attempted to fetch an image using the sample Pachyderm notebook in Jupyter. The error stated that the image could not be found. Pachyderm has corrected this issue.

RHODS-4584 - Jupyter failed to start a notebook server using the OpenVINO notebook image

Jupyter’s Start a notebook server page failed to start a notebook server using the OpenVINO notebook image. Intel has provided an update to the OpenVINO operator to correct this issue.

RHODS-4923 - A non-standard check box displayed after disabling usage data collection

After disabling usage data collection on the Cluster settings page, when a user accessed another area of the OpenShift AI dashboard, and then returned to the Cluster settings page, the Allow collection of usage data check box had a non-standard style applied, and therefore did not look the same as other check boxes when selected or cleared.

RHODS-4938 - Incorrect headings were displayed in the Notebook Images page

The Notebook Images page, accessed from the Settings page on the OpenShift AI dashboard, displayed incorrect headings in the user interface. The Notebook image settings heading displayed as BYON image settings, and the Import Notebook images heading displayed as Import BYON images. The correct headings are now displayed as expected.

RHODS-4818 - Jupyter was unable to display images when the NVIDIA GPU add-on was installed

The Start a notebook server page did not display notebook images after installing the NVIDIA GPU add-on. Images are now correctly displayed, and can be started from the Start a notebook server page.

RHODS-4797 - PVC usage limit alerts were not sent when usage exceeded 90% and 100%

Alerts indicating when a PVC exceeded 90% and 100% of its capacity failed to be triggered and sent. These alerts are now triggered and sent as expected.

RHODS-4366 - Cluster settings were reset on operator restart

When the OpenShift AI operator pod was restarted, cluster settings were sometimes reset to their default values, removing any custom configuration. The OpenShift AI operator was restarted when a new version of OpenShift AI was released, and when the node that ran the operator failed. This issue occurred because the operator deployed ConfigMaps incorrectly. Operator deployment instructions have been updated so that this no longer occurs.

RHODS-4318 - The OpenVINO notebook image failed to build successfully

The OpenVINO notebook image failed to build successfully and displayed an error message. This issue has now been resolved.

RHODS-3743 - Starburst Galaxy quick start did not provide download link in the instruction steps

The Starburst Galaxy quick start, located on the Resources page on the dashboard, required the user to open the explore-data.ipynb notebook, but failed to provide a link within the instruction steps. Instead, the link was provided in the quick start’s introduction.

RHODS-1974 - Changing alert notification emails required pod restart

Changes to the list of notification email addresses in the Red Hat OpenShift AI Add-On were not applied until after the rhods-operator pod and the prometheus-* pod were restarted.

RHODS-2738 - Red Hat OpenShift API Management 1.15.2 add-on installation did not successfully complete

For OpenShift AI installations that are integrated with the Red Hat OpenShift API Management 1.15.2 add-on, the Red Hat OpenShift API Management installation process did not successfully obtain the SMTP credentials secret. Subsequently, the installation did not complete.

RHODS-3237 - GPU tutorial did not appear on dashboard

The "GPU computing" tutorial, located at Gtc2018-numba, did not appear on the Resources page on the dashboard.

RHODS-3069 - GPU selection persisted when GPU nodes were unavailable

When a user provisioned a notebook server with GPU support, and the utilized GPU nodes were subsequently removed from the cluster, the user could not create a notebook server. This occurred because the most recently used setting for the number of attached GPUs was used by default.

RHODS-3181 - Pachyderm now compatible with OpenShift Dedicated 4.10 clusters

Pachyderm was not initially compatible with OpenShift Dedicated 4.10, and so was not available in OpenShift AI running on an OpenShift Dedicated 4.10 cluster. Pachyderm is now available on and compatible with OpenShift Dedicated 4.10.

RHODS-2160 - Uninstall process failed to complete when both OpenShift AI and OpenShift API Management were installed

When OpenShift AI and OpenShift API Management are installed together on the same cluster, they use the same Virtual Private Cluster (VPC). The uninstall process for these Add-ons attempts to delete the VPC. Previously, when both Add-ons were installed, the uninstall process for one service was blocked because the other service still had resources in the VPC. The cleanup process has been updated so that this conflict does not occur.

RHODS-2747 - Images were incorrectly updated after upgrading OpenShift AI

After the process to upgrade OpenShift AI completed, Jupyter failed to update its notebook images. This was due to an issue with the image caching mechanism. Images now update correctly after an upgrade.

RHODS-2425 - Incorrect TensorFlow and TensorBoard versions displayed during notebook selection

The Start a notebook server page displayed incorrect version numbers (2.4.0) for TensorFlow and TensorBoard in the TensorFlow notebook image. These versions have been corrected to TensorFlow 2.7.0 and TensorBoard 2.6.0.

RHODS-24339 - Quick start links did not display for enabled applications

For some applications, the Open quick start link failed to display on the application tile on the Enabled page. As a result, users did not have direct access to the quick start tour for the relevant application.

RHODS-2215 - Incorrect Python versions displayed during notebook selection

The Start a notebook server page displayed incorrect versions of Python for the TensorFlow and PyTorch notebook images. Additionally, the third integer of package version numbers is now no longer displayed.

RHODS-1977 - Ten minute wait after notebook server start fails

If the Jupyter leader pod failed while the notebook server was being started, the user could not access their notebook server until the pod restarted, which took approximately ten minutes. This process has been improved so that the user is redirected to their server when a new leader pod is elected. If this process times out, users see a 504 Gateway Timeout error, and can refresh to access their server.

Chapter 7. Known issues

This section describes known issues in Red Hat OpenShift AI and any known methods of working around these issues.

RHOAIENG-35623 - Model deployment fails when using hardware profiles

Model deployments that use hardware profiles fail because the Red Hat OpenShift AI Operator does not inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService when manually creating InferenceService resources. As a result, the model deployment pods cannot be scheduled to suitable nodes and the deployment fails to enter a ready state. Workbenches that use the same hardware profile continue to deploy successfully.

Workaround
Run a script to manually inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService as described in the Knowledgebase solution Workaround for model deployment failure when using hardware profiles.

RHOAIENG-33995 - Deployment of an inference service for Phi and Mistral models fails

The creation of an inference service for Phi and Mistral models using the vLLM runtime on an IBM Power cluster with OpenShift Container Platform 4.19 fails due to an error related to the CPU backend. As a result, deployment of these models is affected, causing inference service creation failure.

Workaround
To resolve this issue, disable the sliding_window mechanism in the serving runtime if it is enabled for CPU and Phi models. Sliding window is not currently supported in V1.
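
The exact mechanism depends on your serving runtime configuration. As a minimal sketch, assuming your deployment uses a vLLM-based ServingRuntime and a vLLM version that supports the --disable-sliding-window engine argument, you can append the argument to the runtime container (the runtime and container names are placeholders):

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: <vllm-runtime-name>
spec:
  containers:
    - name: kserve-container
      # Keep the existing vLLM arguments and append the flag below
      args:
        - --disable-sliding-window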

RHOAIENG-33914 - LM-Eval Tier2 task test failures

If you are using an older version of the trustyai-service-operator, LM-Eval Tier2 task tests can fail because the Massive Multitask Language Understanding Symbol Replacement (MMLUSR) tasks are broken.

Workaround
Ensure that the latest version of the trustyai-service-operator is installed.

RHOAIENG-33795 - Manual Route creation needed for gRPC endpoint verification for Triton Inference Server on IBM Z

When you verify the Triton Inference Server with a gRPC endpoint, the Route is not created automatically. This happens because the Operator currently defaults to creating an edge-terminated route for REST only.

Workaround

To resolve this issue, manually create a Route for gRPC endpoint verification for Triton Inference Server on IBM Z:

  1. When the model deployment pod is up and running, define an edge-terminated Route object in a YAML file with the following contents:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: <grpc-route-name>                  # e.g. triton-grpc
      namespace: <model-deployment-namespace>  # namespace where your model is deployed
      labels:
        inferenceservice-name: <inference-service-name>
      annotations:
        haproxy.router.openshift.io/timeout: 30s
    spec:
      host: <custom-hostname>                  # e.g. triton-grpc.<apps-domain>
      to:
        kind: Service
        name: <service-name>                   # name of the predictor service (e.g. triton-predictor)
        weight: 100
      port:
        targetPort: grpc                       # must match the gRPC port exposed by the service
      tls:
        termination: edge
      wildcardPolicy: None
  2. Create the Route object:

    oc apply -f <route-file-name>.yaml
  3. To send an inference request, enter the following command:

    grpcurl -cacert <ca_cert_file> \
      -protoset triton_desc.pb \
      -d '{
        "model_name": "<model_name>",
        "inputs": [
          {
            "name": "<input_tensor_name>",
            "shape": [<shape>],
            "datatype": "<data_type>",
            "contents": {
              "<datatype_specific_contents>": [<input_data_values>]
            }
          }
        ],
        "outputs": [
          {
            "name": "<output_tensor_name>"
          }
        ]
      }' \
      <grpc_route_host>:443 \
      inference.GRPCInferenceService/ModelInfer

    where <ca_cert_file> is the path to your cluster router CA cert (for example, router-ca.crt).
Note

triton_desc.pb is a compiled protobuf descriptor file. You can generate it by running protoc -I. --descriptor_set_out=triton_desc.pb --include_imports grpc_service.proto.

Download the grpc_service.proto and model_config.proto files from the triton-inference-server GitHub page.

RHOAIENG-33697 - Unable to Edit or Delete models unless status is "Started"

When you deploy a model on the NVIDIA NIM or single-model serving platform, the Edit and Delete options in the action menu are not available for models in the Starting or Pending states. These options become available only after the model has been successfully deployed.

Workaround
Wait until the model is in the Started state to make any changes or to delete the model.

RHOAIENG-33645 - LM-Eval Tier1 test failures

If you are using an older version of the trustyai-service-operator, LM-Eval Tier1 tests can fail because confirm_run_unsafe_code is not passed as an argument when a job is run.

Workaround
Ensure that you are using the latest version of the trustyai-service-operator and that AllowCodeExecution is enabled.

RHOAIENG-32942 - Elyra pipelines fail when pipeline store is set to Kubernetes

When the pipeline store is configured to use Kubernetes, Elyra requires equality (eq) filters that are not supported by the REST API. Only substring filters are supported in this mode. As a result, pipelines created and submitted through Elyra from a workbench cannot run successfully. Submissions fail with the following error:

Invalid input error: Filter eq is not implemented for Kubernetes pipeline store.
Workaround

Configure the pipeline server to use the database instead of Kubernetes for storing pipelines:

  • When creating a pipeline server, set the pipeline store to database.
  • If the server is already created, update the DataSciencePipelinesApplication custom resource by setting .spec.pipelineStore to database. This triggers the dspa pod to be recreated.

After switching the pipeline store to database, Elyra pipelines can be submitted successfully from a workbench.
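
For an already-created pipeline server, a merge patch of the following form can switch the pipeline store (a sketch; the DataSciencePipelinesApplication name and namespace are placeholders):

oc patch datasciencepipelinesapplication <dspa-name> -n <project-namespace> \
  --type=merge -p '{"spec": {"pipelineStore": "database"}}'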

RHOAIENG-32897 - Pipelines defined with the Kubernetes API and invalid platformSpec do not appear in the UI or run

When a pipeline version defined with the Kubernetes API includes an empty or invalid spec.platformSpec field (for example, {} or missing the kubernetes key), the system misidentifies the field as the pipeline specification. As a result, the REST API omits the pipelineSpec, which prevents the pipeline version from being displayed in the UI and from running.

Workaround
Remove the spec.platformSpec field from the PipelineVersion object. After removing the field, the pipeline version is displayed correctly in the UI and the REST API returns the pipelineSpec as expected.
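
For illustration only, assuming the PipelineVersion resource is accessible to the oc client in your cluster, a JSON patch similar to the following can remove the field (the version name and namespace are placeholders):

oc patch pipelineversion <pipeline-version-name> -n <project-namespace> \
  --type=json -p '[{"op": "remove", "path": "/spec/platformSpec"}]'

Alternatively, edit the object with oc edit and delete the spec.platformSpec field manually.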

RHOAIENG-31386 - Error deploying an Inference Service with authenticationRef

When deploying an InferenceService with authenticationRef under external metrics, the authenticationRef field is removed after the first oc apply.

Workaround
Re-apply the resource to retain the field.

RHOAIENG-30493 - Error creating a workbench in a Kueue-enabled project

When using the dashboard to create a workbench in a Kueue-enabled project, the creation fails if Kueue is disabled on the cluster or if the selected hardware profile is not associated with a LocalQueue. In this case, the required LocalQueue cannot be referenced, the admission webhook validation fails, and the following error message is shown:

Error creating workbench
admission webhook "kubeflow-kueuelabels-validator.opendatahub.io" denied the request: Kueue label validation failed: missing required label "kueue.x-k8s.io/queue-name"
Workaround

Enable Kueue and hardware profiles on your cluster as a user with cluster-admin permissions:

  1. Log in to your cluster by using the oc client.
  2. Run the following command to patch the OdhDashboardConfig custom resource in the redhat-ods-applications namespace:
oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"disableKueue": false, "disableHardwareProfiles": false}}}'

RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization

When you remove a DSCInitialization resource and create a new one by using the OpenShift AI console form view, a Technology Preview observability stack is enabled. This results in the deployment of an unwanted observability stack when recreating the DSCInitialization resource.

Workaround

To resolve this issue, manually remove the "metrics" and "traces" fields when recreating the DSCInitialization resource using the form view.

This is not required if you want to use the Technology Preview observability stack.

RHOAIENG-32145 - Llama Stack Operator deployment failures on OpenShift versions earlier than 4.17

When installing OpenShift AI on OpenShift clusters running versions earlier than 4.17, the integrated Llama Stack Operator (llamastackoperator) might fail to deploy.

The Llama Stack Operator requires Kubernetes version 1.32 or later, but OpenShift 4.15 uses Kubernetes 1.28. This version gap can cause schema validation failures when applying the LlamaStackDistribution custom resource definition (CRD), due to unsupported selectable fields introduced in Kubernetes 1.32.

Workaround
Install OpenShift AI on an OpenShift cluster running version 4.17 or later.

RHOAIENG-32242 - Failure on creating NetworkPolicies for OpenShift versions 4.15 and 4.16

When installing OpenShift AI on OpenShift clusters running versions 4.15 or 4.16, deployment of certain NetworkPolicy resources might fail. This can occur when the llamastackoperator or related components attempt to create a NetworkPolicy in a protected namespace, such as redhat-ods-applications. The request can be blocked by the admission webhook networkpolicies-validation.managed.openshift.io, which restricts modifications to certain namespaces and resources, even for cluster-admin users. This restriction can apply to both self-managed and Red Hat–managed OpenShift environments.

Workaround
Deploy OpenShift AI on an OpenShift cluster running version 4.17 or later. For clusters where the webhook restriction is enforced, contact your OpenShift administrator or Red Hat Support to determine an alternative deployment pattern or approved change to the affected namespace.

RHOAIENG-32599 - Inference service creation fails on IBM Z cluster

When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name.

Workaround
None.

RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19

When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).

Workaround
When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all.
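
As a minimal sketch, the environment variable can be set on the predictor container of the InferenceService (the model and runtime names are placeholders):

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
spec:
  predictor:
    model:
      runtime: <vllm-runtime-name>
      env:
        # Bind vLLM CPU OpenMP threads to all available CPUs
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"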

RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access

When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.

The following error appears in the logs:

Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
 ...
PermissionError: [Error 13] Permission denied: '/.config'
Workaround
To prevent this error and suppress the usage stats logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment. This disables automatic usage reporting, avoiding permission issues when you write to system directories.
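
As a minimal sketch, the variable can be added to the predictor container of the InferenceService in the same way as other environment variables (the fields shown are a fragment, not a complete resource):

spec:
  predictor:
    model:
      env:
        # Disable vLLM automatic usage statistics reporting
        - name: VLLM_NO_USAGE_STATS
          value: "1"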

RHOAIENG-28910 - Unmanaged KServe resources are deleted after upgrading from 2.16 to 2.19 or later

During the upgrade from OpenShift AI 2.16 to 1, the FeatureTracker custom resource (CR) is deleted before its owner references are fully removed from associated KServe-related resources. As a result, resources that were originally created by the Red Hat OpenShift AI Operator with a Managed state and later changed to Unmanaged in the DataScienceCluster (DSC) custom resource (CR) might be unintentionally removed. This issue can disrupt model serving functionality until the resources are manually restored.

The following resources might be deleted in 1 if they were changed to Unmanaged in 2.16:

Kind                Namespace        Name

KnativeServing      knative-serving  knative-serving
ServiceMeshMember   knative-serving  default
Gateway             istio-system     kserve-local-gateway
Gateway             knative-serving  knative-ingress-gateway
Gateway             knative-serving  knative-local-gateway

Workaround

If you have already upgraded from OpenShift AI 2.16 to 1, perform one of the following actions:

  • If you have an existing backup, manually recreate the deleted resources without owner references to the FeatureTracker CR.
  • If you do not have an existing backup, you can use the Operator to recreate the deleted resources:

    1. Back up any resources you have already recreated.
    2. In the DSC, set spec.components.kserve.serving.managementState to Managed, and then save the change to allow the Operator to recreate the resources.

      Wait until the Operator has recreated the resources.

    3. In the DSC, set spec.components.kserve.serving.managementState back to Unmanaged, and then save the change.
    4. Reapply any previous custom changes to the recreated KnativeServing, ServiceMeshMember, and Gateway CRs.

If you have not yet upgraded, perform the following actions before upgrading to prevent this issue:

  1. In the DSC, set spec.components.kserve.serving.managementState to Unmanaged.
  2. For each of the affected KnativeServing, ServiceMeshMember, and Gateway resources listed in the above table, edit its CR by deleting the FeatureTracker owner reference. This edit removes the resource’s dependency on the FeatureTracker and prevents the deletion of the resource during the upgrade process.
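
In either case, the managementState field can be changed in the dashboard, with oc edit, or with a merge patch similar to the following sketch (the DataScienceCluster name is a placeholder; DataScienceCluster is cluster-scoped, so no namespace is needed):

oc patch datasciencecluster <dsc-name> --type=merge \
  -p '{"spec": {"components": {"kserve": {"serving": {"managementState": "Unmanaged"}}}}}'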

RHOAIENG-24545 - Runtime images are not present in the workbench after the first start

The list of runtime images does not populate properly in the first workbench instance that runs in a namespace. As a result, no image is shown for selection in the Elyra pipeline editor.

Workaround
Restart the workbench. After restarting the workbench, the list of runtime images populates both the workbench and the select box for the Elyra pipeline editor.

RHOAIENG-25090 - InstructLab prerequisites-check-op task fails when the model registration option is disabled

When you start a LAB-tuning run without selecting the Add model to <model registry name> checkbox, the InstructLab pipeline starts, but the prerequisites-check-op task fails with the following error in the pod logs:

failed: failed to resolve inputs: the resolved input parameter is null: output_model_name
Workaround
Select the Add model to <model registry name> checkbox when you configure the LAB-tuning run.

RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold

When you click Distributed workloads → Project metrics and view the Requested resources section, the charts show the requested resource values and the total shared quota value for each resource (CPU and Memory). However, when the Requested by all projects value exceeds the Warning threshold value for that resource, the expected warning message is not displayed.

Workaround
None.

SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe

After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.

Workaround

Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:

  1. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  2. Ensure KServe is disabled.
  3. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  4. Delete the webhooks.
  5. Enable KServe.
  6. Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
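
For step 4, a deletion command of the following form can be used for each Knative webhook returned in the earlier steps (the webhook names are placeholders):

oc delete validatingwebhookconfiguration <knative-validating-webhook-name>
oc delete mutatingwebhookconfiguration <knative-mutating-webhook-name>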

RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard

When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.

When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.

This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in data science pipelines, see Storing data with data science pipelines.

Workaround
When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.

RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later

If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version.

ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
Workaround
  1. Delete the existing AppWrapper CRD:

    $ oc delete crd appwrappers.workload.codeflare.dev
  2. Wait for about 20 seconds, and then ensure that a new AppWrapper CRD is automatically applied, as shown in the following example:

    $ oc get crd appwrappers.workload.codeflare.dev
    NAME                                 CREATED AT
    appwrappers.workload.codeflare.dev   2024-11-22T18:35:04Z

RHOAIENG-7716 - Pipeline condition group status does not update

When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.

Workaround

You can confirm if a pipeline is still running by checking that no child tasks remain active.

  1. From the OpenShift AI dashboard, click Data Science Pipelines → Runs.
  2. From the Project list, click your data science project.
  3. From the Runs tab, click the pipeline run that you want to check the status of.
  4. Expand the condition group and click a child task.

    A panel that contains information about the child task is displayed.

  5. On the panel, click the Task details tab.

    The Status field displays the correct status for the child task.

RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs

When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.

Workaround
None.

RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics

In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.

Workaround
None.

RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Migrating to data science pipelines 2.0.

Workaround
Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade.

RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.

Workaround

Perform the following actions:

  1. In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
  2. To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:

    • If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
    • If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageURI field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.

KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
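
For the second option, a minimal InferenceService sketch might look like the following (the model name, model format, runtime name, and bucket name are placeholders; the exact storage URI scheme depends on your storage configuration):

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
spec:
  predictor:
    model:
      modelFormat:
        name: <model-format>
      runtime: <ovms-runtime-name>
      # Point to the parent directory; do not include the 1/ subdirectory
      storageUri: s3://<s3_storage_bucket>/models/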

RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.

Workaround
To send queries to the model, you must add the /v2/models/_<model-name>_/infer string to the end of the URL. Replace _<model-name>_ with the name of your deployed model.
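
For example, a REST query to the completed URL might look like the following (a sketch; the payload shape depends on your model, and all values shown are placeholders):

curl -k https://<inference-endpoint>/v2/models/<model-name>/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "<input_tensor_name>", "shape": [<shape>], "datatype": "<data_type>", "data": [<input_data_values>]}]}'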

RHOAIENG-2602 - “Average response time" server metric graph shows multiple lines due to ModelMesh pod restart

The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.

Workaround
None.

RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds

On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.

Workaround
None.

RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels

In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.

Workaround
None.

RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded

When numerous InferenceService instances are created and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.

Workaround
Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.

RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable

When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.

Workaround
Verify that your data connection credentials are correct and that you have write access to the bucket you specified.

RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times

If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.

Workaround
Change the metadata.name field to a unique value.

RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart

If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you create a workbench and specify a workbench image within it, you cannot execute the pipeline, even after restarting the workbench.

Workaround
  1. Stop the running workbench.
  2. Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
  3. Restart the workbench.
  4. In the left sidebar of JupyterLab, click Runtimes.
  5. Confirm that the default runtime is selected.

RHODS-12798 - Pods fail with "unable to init seccomp" error

Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:

runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
Workaround
Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.

KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy

You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.

Workaround
None.

KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard

If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you can still open it in your browser.

Workaround
Log out of JupyterLab before you log out of the OpenShift AI dashboard.

RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely

When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.

Workaround
When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.

RHOAIENG-1152 (previously documented as RHODS-6356) - The basic-workbench creation process fails for users who have never logged in to the dashboard

The dashboard’s Administration page for basic workbenches displays users who belong to the user group and admin group in OpenShift. However, if an administrator attempts to start a basic workbench on behalf of a user who has never logged in to the dashboard, the basic-workbench creation process fails and displays the following error message:

Request invalid against a username that does not exist.
Workaround
Request that the relevant user logs into the dashboard.

RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler

When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.

Workaround
Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
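
As a sketch, the label is added under spec.template.spec.metadata in the MachineSet (the label value is a placeholder and must match the accelerator type configured for your cluster autoscaler):

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: <accelerator-label-value>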

RHODS-4799 - Tensorboard requires manual steps to view

When a user has a TensorFlow or PyTorch workbench image and wants to use TensorBoard to display data, manual steps are necessary to include environment variables in the workbench environment, and to import those variables for use in their code.

Workaround

When you start your basic workbench, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.

import os
os.environ["TENSORBOARD_PROXY_URL"]= os.environ["NB_PREFIX"]+"/proxy/6006/"

RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks

The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.

Workaround
None.

RHODS-3984 - Incorrect package versions displayed during notebook selection

In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.

Workaround
When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server and which version of the package you have by running the !pip list command in a notebook cell.

RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible

The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.

Workaround
None.

RHODS-2096 - IBM Watson Studio not available in OpenShift AI

IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.

Workaround
Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.

RHODS-1888 - OpenShift AI hyperlink still visible after uninstall

When the OpenShift AI Add-on is uninstalled from an OpenShift Dedicated cluster, the link to the OpenShift AI interface remains visible in the application launcher menu. Clicking this link results in a "Page Not Found" error because OpenShift AI is no longer available.

Workaround
None.

Chapter 8. Product features

Red Hat OpenShift AI provides a rich set of features for data scientists and cluster administrators. To learn more, see Introduction to Red Hat OpenShift AI.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.