Release notes
Features, enhancements, resolved issues, and known issues associated with this release
Chapter 1. Overview of OpenShift AI
Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications.
OpenShift AI provides an environment to develop, train, serve, test, and monitor AI/ML models and applications on-premise or in the cloud.
For data scientists, OpenShift AI includes Jupyter and a collection of default notebook images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. Deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. You can also accelerate your data science experiments through the use of graphics processing units (GPUs) and Intel Gaudi AI accelerators.
For administrators, OpenShift AI enables data science workloads in an existing Red Hat OpenShift or ROSA environment. Manage users with your existing OpenShift identity provider, and manage the resources available to notebook servers to ensure data scientists have what they require to create, train, and host models. Use accelerators to reduce costs and allow your data scientists to enhance the performance of their end-to-end data science workflows using graphics processing units (GPUs) and Intel Gaudi AI accelerators.
OpenShift AI has two deployment options:
Self-managed software that you can install on-premise or in the cloud. You can install OpenShift AI Self-Managed in a self-managed environment such as OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA Classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift.
For information about OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see Product Documentation for Red Hat OpenShift AI Self-Managed.
A managed cloud service, installed as an add-on in Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP) or in Red Hat OpenShift Service on Amazon Web Services (ROSA Classic).
For information about OpenShift AI Cloud Service, see Product Documentation for Red Hat OpenShift AI.
For information about OpenShift AI supported software platforms, components, and dependencies, see Supported configurations.
Chapter 2. New features and enhancements
This section describes new features and enhancements in Red Hat OpenShift AI.
2.1. New features
- Tabular data drift detection with KServe
As part of ongoing advancements in the Responsible AI space, backend metrics support for drift monitoring is now available in the TrustyAI project. These capabilities align with the TrustyAI framework to ensure secure and reliable AI operations.
Drift monitoring is crucial for maintaining AI model performance and fairness over time. By implementing backend metrics first, this release lays the groundwork for comprehensive drift detection and monitoring, enhancing the trustworthiness and compliance of AI deployments. Future updates will extend these capabilities with a user interface for streamlined interaction.
- Bias and fairness monitoring
- Using TrustyAI, data scientists can now evaluate deployed models for bias and fairness. The update includes both backend functionality and dashboard visualizations.
- Support heterogeneous clusters in distributed workloads
Data scientists can now select specific queues based on their workload requirements, improving efficiency in resource-constrained environments. Administrators can configure these workload queues per cluster and data science project.
Additionally, the dashboard now supports visibility across all queues in heterogeneous clusters. You can view both the assigned queue and resource details for each workload, providing clear insights into workload distribution.
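For illustration, the following minimal sketch shows one way to target a specific queue when creating a Ray cluster with the CodeFlare SDK. The project name and queue name are hypothetical; the available LocalQueue resources depend on how your administrator has configured Kueue.

# Minimal sketch: submit a Ray cluster to a specific Kueue local queue.
# The namespace and queue name below are hypothetical examples.
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",
    namespace="my-data-science-project",   # your data science project
    num_workers=2,
    local_queue="team-a-queue",            # the Kueue LocalQueue to target
))
cluster.up()   # the workload is admitted according to the selected queue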
2.2. Enhancements
- Data science pipelines updates
The data science pipelines components are updated to Kubeflow Pipelines 2.2.0. For more information, see the version 2.2.0 changelog.
Additionally, the internal Argo Workflows engine was updated to version 3.4.17, which includes security fixes.
Chapter 3. Technology Preview features
This section describes Technology Preview features in Red Hat OpenShift AI. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- RStudio Server notebook image
With the RStudio Server notebook image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions.
To use the RStudio Server notebook image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
- CUDA - RStudio Server notebook image
With the CUDA - RStudio Server notebook image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools.
To use the CUDA - RStudio Server notebook image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
The CUDA - RStudio Server notebook image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench.
- code-server workbench image
Red Hat OpenShift AI now includes the code-server workbench image. See code-server in GitHub for more information.
With the code-server workbench image, you can customize your workbench environment by using a variety of extensions to add new languages, themes, and debuggers, and to connect to additional services. You can also enhance the efficiency of your data science work with syntax highlighting, auto-indentation, and bracket matching.
Elyra-based pipelines are not available with the code-server workbench image.
The code-server workbench image is currently available in Red Hat OpenShift AI as a Technology Preview feature.
- Support for AMD GPUs
- The AMD ROCm workbench image adds support for the AMD GPU Operator, significantly boosting the processing performance for compute-intensive activities. This provides you with access to drivers, development tools, and APIs that support AI workloads and a wide range of models. Additionally, the AMD ROCm workbench image includes machine learning libraries to support AI frameworks such as TensorFlow and PyTorch. This Technology Preview release also provides access to images that you can use to explore serving, training, and tuning use cases with AMD GPUs.
- Model Registry
- OpenShift AI now supports the Model Registry Operator. The Model Registry Operator is not installed by default in Technology Preview mode. The model registry is a central repository that contains metadata related to machine learning models from inception to deployment.
- NVIDIA NIM model serving platform
- With the NVIDIA NIM model serving platform, you can deploy NVIDIA optimized models using NVIDIA NIM inference services in OpenShift AI. NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across the cloud, data center and workstations. Supporting a wide range of AI models, including open-source community and NVIDIA AI Foundation models, it ensures seamless, scalable AI inferencing, on-premises or in the cloud, leveraging industry standard APIs. You need an NVIDIA AI Enterprise license key to enable the NVIDIA NIM model serving platform in OpenShift AI.
For more information, see About the NVIDIA NIM model serving platform.
- OCI containers for model storage
You can use OCI storage as an alternative to cloud storage services for model serving. First, you create an OCI container image that contains the model. The image is uploaded to an OCI-compatible registry, such as Quay. Later, when you deploy a model, the model serving platform references the repository of the containerized model.
Using an OCI container can provide the following advantages:
- Reduced startup times, because the cluster keeps a cache of downloaded images. Restarting the model pod does not download the model again.
- Lower disk space usage, because the model is not downloaded on each pod replica, assuming pods are scheduled on the same node.
- Enhanced performance when pre-fetching images or loading models asynchronously.
- Compatibility and integration, because it can be easily integrated with KServe. No additional dependencies are required and the infrastructure might already be available.
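For illustration, the following sketch shows how a deployment might reference a containerized model from Python by creating a KServe InferenceService whose storageUri uses the oci:// scheme. The registry path, model format, and namespace are hypothetical examples.

# Illustrative sketch: reference a model stored in an OCI image when
# creating a KServe InferenceService. All names and paths are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside the cluster

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "my-oci-model", "namespace": "my-project"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "onnx"},
                # The platform pulls the model image from the OCI registry;
                # restarts reuse the node's image cache instead of re-downloading.
                "storageUri": "oci://quay.io/example/my-model:1.0",
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="my-project",
    plural="inferenceservices",
    body=inference_service,
)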
Chapter 4. Developer Preview features
This section describes Developer Preview features in Red Hat OpenShift AI.
Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
- Support for AppWrapper in Kueue
- AppWrapper support in Kueue is available as a Developer Preview feature. The experimental API enables the use of AppWrapper-based workloads with the distributed workloads feature.
- ROCm-compatible Ray cluster image
- An additional Ray cluster image, quay.io/modh/ray:2.35.0-py39-rocm61, is available as Developer Preview software. This image is compatible with AMD accelerators that are supported by ROCm 6.1. This image is an AMD64 image, which might not work on other architectures.
- Distributed InstructLab training
Distributed InstructLab training is available as a Developer Preview feature, enabling enhanced performance for training tasks on distributed environments compared to single-node setups. This feature improves the training efficiency and scalability of InstructLab, allowing users to leverage distributed resources for more effective AI model development.
Key features:
- Data transfer support: Facilitates the movement of synthetically generated data from InstructLab on Red Hat Enterprise Linux AI to S3-compatible storage for efficient access by distributed training tasks.
- Distributed training execution: Enables orchestration of multi-node InstructLab training jobs, leveraging the performance benefits of distributed infrastructure.
- End-to-end documentation: Comprehensive guidance for users to implement the full InstructLab training flow, including data preparation, transfer, and distributed execution. To access the documentation, see Distributed InstructLab Training on RHOAI.
Chapter 5. Limited Availability features
This section describes Limited Availability features in Red Hat OpenShift AI. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. This applies to all features described in this section.
- Tuning in OpenShift AI
- Tuning in OpenShift AI is available as a Limited Availability feature. The Kubeflow Training Operator and the Hugging Face Supervised Fine-tuning Trainer (SFT Trainer) enable users to fine-tune and train their models easily in a distributed environment. In this release, you can use this feature for models that are based on the PyTorch machine-learning framework.
Chapter 6. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see Supported configurations.
6.1. Removed functionality
6.1.1. Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs are no longer stored in S3-compatible storage for Python scripts running in Elyra pipelines. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
For this change to take effect, you must use the Elyra runtime images provided in the 2024.1 or 2024.2 workbench images.
If you have an older workbench image version, update the Version selection field to 2024.1 or 2024.2, as described in Updating a project workbench.
Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
6.1.2. Data science pipelines v1 upgraded to v2
Previously, data science pipelines in OpenShift AI were based on Kubeflow Pipelines v1. Data science pipelines are now based on Kubeflow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from the dashboard. For more information, see Enabling data science pipelines 2.0.
Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade to OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.
If you want to use existing pipelines and workbenches with data science pipelines 2.0 after upgrading OpenShift AI, you must update your workbenches to use the 2024.1 notebook image version or later and then manually migrate your pipelines from data science pipelines 1.0 to 2.0. For more information, see Upgrading to data science pipelines 2.0.
6.1.3. Version 1.2 notebook container images for workbenches are no longer supported
When you create a workbench, you specify a notebook container image to use with the workbench. Starting with OpenShift AI 2.5, when you create a new workbench, version 1.2 notebook container images are not available to select. Workbenches that are already running with a version 1.2 notebook image continue to work normally. However, Red Hat recommends that you update your workbench to use the latest notebook container image.
6.1.4. NVIDIA GPU Operator replaces NVIDIA GPU add-on
Previously, to enable graphics processing units (GPUs) to help with compute-heavy workloads, you installed the NVIDIA GPU add-on. OpenShift AI no longer supports this add-on.
Now, to enable GPU support, you must install the NVIDIA GPU Operator. To learn how to install the GPU Operator, see NVIDIA GPU Operator on Red Hat OpenShift Container Platform (external).
6.1.5. Kubeflow Notebook Controller replaces JupyterHub
In OpenShift AI 1.15 and earlier, JupyterHub was used to create and launch notebook server environments. In OpenShift AI 1.16 and later, JupyterHub is no longer included, and its functionality is replaced by Kubeflow Notebook Controller.
This change provides the following benefits:
- Users can now immediately cancel a request, make changes, and retry the request, instead of waiting 5+ minutes for the initial request to time out. This means that users do not wait as long when requests fail, for example, when a notebook server does not start correctly.
- The architecture no longer prevents a single user from having more than one notebook server session, expanding future feature possibilities.
- The removal of the PostgreSQL database requirement allows for future expanded environment support in OpenShift AI.
However, this update also creates the following behavior changes:
- For IT Operations administrators, the notebook server administration interface does not currently allow login access to data scientist users' notebook servers. This is planned to be added in future releases.
- For data scientists, the JupyterHub interface URL is no longer valid. Update your bookmarks to point to the OpenShift AI Dashboard.
The JupyterLab interface is unchanged and data scientists can continue to use JupyterLab to work with their notebook files as usual.
6.1.6. HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
6.2. Deprecated functionality
6.2.1. Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
Deprecated parameter | Replaced by
---|---
head_cpus | head_cpu_requests, head_cpu_limits
head_memory | head_memory_requests, head_memory_limits
min_cpus | worker_cpu_requests
max_cpus | worker_cpu_limits
min_memory | worker_memory_requests
max_memory | worker_memory_limits
head_gpus | head_extended_resource_requests
num_gpus | worker_extended_resource_requests
You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
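For illustration, a Ray cluster configuration that uses the new parameter names might look like the following minimal sketch; all values are hypothetical.

# Minimal sketch: ClusterConfiguration using the new parameter names in
# place of the deprecated ones. All values are illustrative.
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",
    namespace="my-data-science-project",
    num_workers=2,
    worker_cpu_requests=1,        # previously min_cpus
    worker_cpu_limits=2,          # previously max_cpus
    worker_memory_requests=4,     # previously min_memory
    worker_memory_limits=8,       # previously max_memory
    worker_extended_resource_requests={"nvidia.com/gpu": 1},  # previously num_gpus
))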
Chapter 7. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI.
RHOAIENG-14571 - Data Science Pipelines API server unreachable in OpenShift AI installations on managed IBM Cloud OpenShift clusters
Previously, when configuring a data science pipeline server, communication errors that prevented successful interaction with the pipeline server occurred. This issue is now resolved.
RHOAIENG-14195 - Ray cluster creation fails when deprecated head_memory parameter is used
Previously, if you included the deprecated head_memory parameter in your Ray cluster configuration, the Ray cluster creation failed. This issue is now resolved.
RHOAIENG-11895 - Unable to clone a GitHub repo in JupyterLab when configuring a custom CA bundle using |-
Previously, if you configured a custom Certificate Authority (CA) bundle in the DSCInitialization (DSCI) object using |-, cloning a repo from JupyterLab failed. This issue is now resolved.
RHOAIENG-1132 (previously documented as RHODS-6383) - An ImagePullBackOff error message is not displayed when required during the workbench creation process
Previously, pods experienced issues pulling container images from the container registry. When an error occurred, the relevant pod entered into an ImagePullBackOff state. During the workbench creation process, if an ImagePullBackOff error occurred, an appropriate message was not displayed. This issue is now resolved.
RHOAIENG-13327 - Importer component (dsl.importer) prevents pipelines from running
Previously, pipelines could not run when using the data science pipelines importer component, dsl.importer. This issue is now resolved.
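For reference, a minimal kfp v2 pipeline that uses the importer component looks like the following sketch; the artifact URI is a hypothetical example.

# Minimal sketch: use dsl.importer to bring an existing artifact into a run.
from kfp import dsl

@dsl.pipeline(name="importer-example")
def importer_pipeline():
    # Import a model artifact from object storage; the URI is hypothetical.
    importer_task = dsl.importer(
        artifact_uri="s3://my-bucket/models/model.onnx",
        artifact_class=dsl.Model,
        reimport=False,
    )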
RHOAIENG-14652 - kfp-client unable to connect to the pipeline server on OCP 4.16 and later
In OpenShift 4.16 and later FIPS clusters, data science pipelines were accessible through the OpenShift AI Dashboard. However, connections to the pipelines API server from the KFP SDK failed due to a TLS handshake error. This issue is now resolved.
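For reference, the following minimal sketch shows one way to connect the KFP SDK client to a pipeline server route, passing an OpenShift token and the CA bundle used to verify the route's TLS certificate; the host URL, token, and certificate path are hypothetical.

# Minimal sketch: connect the KFP SDK client to a pipeline server route.
# The host URL, token, and CA bundle path are hypothetical examples.
import kfp

client = kfp.Client(
    host="https://ds-pipeline-dspa-my-project.apps.example.com",
    existing_token="sha256~<token>",          # an OpenShift user token
    ssl_ca_cert="/path/to/ca-bundle.crt",     # CA bundle for TLS verification
)
print(client.list_experiments())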
RHOAIENG-10129 - Notebook and Ray cluster with matching names causes secret resolution failure
Previously, if you created a notebook and a Ray cluster that had matching names in the same namespace, one controller failed to resolve its secret because the secret already had an owner. This issue is now resolved.
RHOAIENG-7887 - Kueue fails to monitor RayCluster or PyTorchJob resources
Previously, when you created a DataScienceCluster CR with all components enabled, the Kueue component was installed before the Ray component and the Training Operator component. As a result, the Kueue component did not monitor RayCluster or PyTorchJob resources. When a user created RayCluster or PyTorchJob resources, Kueue did not control the admission of those resources. This issue is now resolved.
RHOAIENG-583 (previously documented as RHODS-8921 and RHODS-6373) - You cannot create a pipeline server or start a workbench when cumulative character limit is exceeded
When the cumulative character limit of a data science project name and a pipeline server name exceeded 62 characters, you were unable to successfully create a pipeline server. Similarly, when the cumulative character limit of a data science project name and a workbench name exceeded 62 characters, workbenches failed to start. This issue is now resolved.
Incorrect logo on dashboard after upgrading
Previously, after upgrading from OpenShift AI 2.11 to OpenShift AI 2.12, the dashboard could incorrectly display the Open Data Hub logo instead of the Red Hat OpenShift AI logo. This issue is now resolved.
RHOAIENG-11297 - Authentication failure after pipeline run
Previously, during the execution of a pipeline run, a connection error could occur due to a certificate authentication failure. This certificate authentication failure could be caused by the use of a multi-line string separator for customCABundle in the default-dsci object, which was not supported by data science pipelines. This issue is now resolved.
RHOAIENG-11232 - Distributed workloads: Kueue alerts do not provide runbook link
After a Kueue alert fires, the cluster administrator can click Observe → Alerting → Alerts and click the name of the alert to open its Alert details page. On the Alert details page, the Runbook section now provides a link to the appropriate runbook to help to diagnose and resolve the issues that triggered the alert. Previously, the runbook link was missing.
RHOAIENG-10665 - Unable to query Speculating with a draft model for granite model
Previously, you could not use speculative decoding on the granite-7b model and granite-7b-accelerator draft model. When querying these models, the queries failed with an internal error. This issue is now resolved.
RHOAIENG-9481 - Pipeline runs menu glitches when clicking action menu
Previously, when you clicked the action menu (⋮) next to a pipeline run on the Experiments > Experiments and runs page, the menu that appeared was not fully visible, and you had to scroll to see all of the menu items. This issue is now resolved.
RHOAIENG-8553 - Workbench created with custom image shows !Deleted flag
Previously, if you disabled the internal image registry on your OpenShift cluster and then created a workbench with a custom image that was imported by using the image tag (for example, quay.io/my-wb-images/my-image:tag), a !Deleted flag was shown in the Notebook image column on the Workbenches tab of the Data Science Projects page. If you stopped the workbench, you could not restart it. This issue is now resolved.
RHOAIENG-6376 - Pipeline run creation fails after setting pip_index_urls in a pipeline component to a URL that contains a port number and path
Previously, when you created a pipeline and set the pip_index_urls value for a component to a URL that contains a port number and path, compiling the pipeline code and then creating a pipeline run could result in an error. This issue is now resolved.
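For reference, the following sketch shows a kfp v2 component that sets pip_index_urls to a private index URL that includes a port number and a path; the URL is a hypothetical example.

# Minimal sketch: a component that installs packages from a private index
# whose URL contains a port number and path. The URL is hypothetical.
from kfp import dsl

@dsl.component(
    base_image="python:3.11",
    packages_to_install=["pandas"],
    pip_index_urls=["https://pypi.example.com:8443/simple/"],
)
def load_data() -> str:
    import pandas as pd
    return pd.__version__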
RHOAIENG-4240 - Jobs fail to submit to Ray cluster in unsecured environment
Previously, when running distributed data science workloads from notebooks in an unsecured OpenShift cluster, a ConnectionError: Failed to connect to Ray error message might be shown. This issue is now resolved.
RHOAIENG-9670 - vLLM container intermittently crashes while processing requests
Previously, if you deployed a model by using the vLLM ServingRuntime for KServe runtime on the single-model serving platform and also configured tensor-parallel-size, depending on the hardware platform you used, the kserve-container container would intermittently crash while processing requests. This issue is now resolved.
RHOAIENG-8043 - vLLM errors during generation with mixtral-8x7b
Previously, some models, such as Mixtral-8x7b, might have experienced sporadic errors due to a Triton issue, such as FileNotFoundError: No such file or directory. This issue is now resolved.
RHOAIENG-2974 - Data science cluster cannot be deleted without its associated initialization object
Previously, you could not delete a DataScienceCluster (DSC) object if its associated DSCInitialization (DSCI) object did not exist. This issue is now resolved.
RHOAIENG-1205 (previously documented as RHODS-11791) - Usage data collection is enabled after upgrade
Previously, the Allow collection of usage data option would activate whenever you upgraded OpenShift AI. Now, you no longer need to manually deselect the Allow collection of usage data option when you upgrade.
RHOAIENG-1204 (previously documented as ODH-DASHBOARD-1771) - JavaScript error during Pipeline step initializing
Previously, the pipeline Run details page stopped working when a run started. This issue has now been resolved.
RHOAIENG-582 (previously documented as ODH-DASHBOARD-1335) - Rename Edit permission to Contributor
On the Permissions tab for a project, the term Edit has been replaced with Contributor to more accurately describe the actions granted by this permission.
For a complete list of updates, see the Errata advisory.
RHOAIENG-8819 - ibm-granite/granite-3b-code-instruct model fails to deploy on single-model serving platform
Previously, if you tried to deploy the ibm-granite/granite-3b-code-instruct model on the single-model serving platform by using the vLLM ServingRuntime for KServe runtime, the model deployment would fail with an error. This issue is now resolved.
RHOAIENG-8218 - Cannot log in to a workbench created on an OpenShift 4.15 cluster without OCP internal image registry
When you create a workbench on an OpenShift cluster that does not have the OpenShift Container Platform internal image registry enabled, the workbench starts successfully, but you cannot log in to it.
This is a known issue with OpenShift 4.15.x versions earlier than 4.15.15. To resolve this issue, upgrade to OpenShift 4.15.15 or later.
RHOAIENG-7346 - Distributed workloads no longer run from existing pipelines after upgrade
Previously, if you tried to upgrade to OpenShift AI 2.10, a distributed workload would no longer run from an existing pipeline if the cluster was created only inside the pipeline. This issue is now resolved.
RHOAIENG-7209 - Error displays when setting the default pipeline root
Previously, if you tried to set the default pipeline root using the data science pipelines SDK or the OpenShift AI user interface, an error would appear. This issue is now resolved.
RHOAIENG-6711 - ODH-model-controller overwrites the spec.memberSelectors field in ServiceMeshMemberRoll objects
Previously, if you added a project or namespace to a ServiceMeshMemberRoll resource by using its spec.memberSelectors field, the ODH-model-controller would overwrite the field. This issue is now resolved.
RHOAIENG-6649 - An error is displayed when viewing a model on a model server that has no external route defined
Previously, if you tried to use the dashboard to deploy a model on a model server that did not have external routes enabled, a t.components is undefined error message would appear while the model creation was in progress. This issue is now resolved.
RHOAIENG-3981 - In unsecured environment, the functionality to wait for Ray cluster to be ready gets stuck
Previously, when running distributed data science workloads from notebooks in an unsecured OpenShift cluster, the functionality to wait for the Ray cluster to be ready before proceeding (cluster.wait_ready()) got stuck even when the Ray cluster was ready. This issue is now resolved.
RHOAIENG-2312 - Importing numpy fails in code-server workbench
Previously, if you tried to import numpy in your code-server workbench, the import failed. This issue is now resolved.
RHOAIENG-1197 - Cannot create pipeline due to the End date picker in the pipeline run creation page defaulting to NaN values when using Firefox on Linux
Previously, if you tried to create a pipeline with a scheduled recurring run using Firefox on Linux, enabling the End Date parameter resulted in Not a Number (NaN) values for both the date and time. This issue is now resolved.
RHOAIENG-1196 (previously documented as ODH-DASHBOARD-2140) - Package versions displayed in dashboard do not match installed versions
Previously, the dashboard would display inaccurate version numbers for packages such as JupyterLab and Notebook. This issue is now resolved.
RHOAIENG-880 - Default pipelines service account is unable to create Ray clusters
Previously, you could not create Ray clusters using the default pipelines Service Account. This issue is now resolved.
RHOAIENG-52 - Token authentication fails in clusters with self-signed certificates
Previously, if you used self-signed certificates, and you used the Python codeflare-sdk in a notebook or in a Python script as part of a pipeline, token authentication would fail. This issue is now resolved.
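For reference, the following minimal sketch shows token authentication with the codeflare-sdk against a cluster that uses self-signed certificates; the server URL, token, and CA bundle path are hypothetical.

# Minimal sketch: authenticate the CodeFlare SDK against a cluster that uses
# self-signed certificates. Server, token, and CA path are hypothetical.
from codeflare_sdk import TokenAuthentication

auth = TokenAuthentication(
    token="sha256~<token>",                 # an OpenShift user token
    server="https://api.example.com:6443",
    skip_tls=False,
    ca_cert_path="/path/to/ca-bundle.crt",  # trust the cluster's CA bundle
)
auth.login()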
RHOAIENG-7312 - Model serving fails during query with token authentication in KServe
Previously, if you enabled both the ModelMesh and KServe components in your DataScienceCluster object and added Authorino as an authorization provider, a race condition could occur that resulted in the odh-model-controller pods being rolled out in a state that was appropriate for ModelMesh, but not for KServe and Authorino. In this situation, if you made an inference request to a running model that was deployed using KServe, you saw a 404 - Not Found error. In addition, the logs for the odh-model-controller deployment object showed a Reconciler error message. This issue is now resolved.
RHOAIENG-7181 (previously documented as RHOAIENG-6343) - Some components are set to Removed after installing OpenShift AI
Previously, after you installed OpenShift AI, the managementState field for the codeflare, kueue, and ray components was incorrectly set to Removed instead of Managed in the DataScienceCluster custom resource. This issue is now resolved.
RHOAIENG-7079 (previously documented as RHOAIENG-6317) - Pipeline task status and logs sometimes not shown in OpenShift AI dashboard
Previously, when running pipelines by using Elyra, the OpenShift AI dashboard might not show the pipeline task status and logs, even when the related pods had not been pruned and the information was still available in the OpenShift Console. This issue is now resolved.
RHOAIENG-7070 (previously documented as RHOAIENG-6709) - Jupyter notebook creation might fail when different environment variables specified
Previously, if you started and then stopped a Jupyter notebook, and edited its environment variables in an OpenShift AI workbench, the notebook failed to restart. This issue is now resolved.
RHOAIENG-6853 - Cannot set pod toleration in Elyra pipeline pods
Previously, if you set a pod toleration for an Elyra pipeline pod, the toleration did not take effect. This issue is now resolved.
RHOAIENG-5314 - Data science pipeline server fails to deploy in fresh cluster due to network policies
Previously, if you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved.
RHOAIENG-4252 - Data science pipeline server deletion process fails to remove ScheduledWorkFlow
resource
Previously, the pipeline server deletion process did not remove the ScheduledWorkFlow
resource. As a result, new DataSciencePipelinesApplications
(DSPAs) did not recognize the redundant ScheduledWorkFlow
resource. This issue is now resolved
RHOAIENG-3411 (previously documented as RHOAIENG-3378) - Internal Image Registry is an undeclared hard dependency for Jupyter notebooks spawn process
Previously, before you could start OpenShift AI notebooks and workbenches, you must have already enabled the internal, integrated container image registry in OpenShift. Attempts to start notebooks or workbenches without first enabling the image registry failed with an "InvalidImageName" error. You can now create and use workbenches in OpenShift AI without enabling the internal OpenShift image registry. If you update a cluster to enable or disable the internal image registry, you must recreate existing workbenches for the registry changes to take effect.
RHOAIENG-2541 - KServe controller pod experiences OOM because of too many secrets in the cluster
Previously, if your OpenShift cluster had a large number of secrets, the KServe controller pod could continually crash due to an out-of-memory (OOM) error. This issue is now resolved.
RHOAIENG-1452 - The Red Hat OpenShift AI Add-on gets stuck
Previously, the Red Hat OpenShift AI Add-on uninstall did not delete OpenShift AI components when the install was triggered via OCM APIs. This issue is now resolved.
RHOAIENG-307 - Removing the DataScienceCluster deletes all OpenShift Serverless CRs
Previously, if you deleted the DataScienceCluster custom resource (CR), all OpenShift Serverless CRs (including knative-serving, deployments, gateways, and pods) were also deleted. This issue is now resolved.
RHOAIENG-6701 - Users without cluster administrator privileges cannot access the job submission endpoint of the Ray dashboard
Previously, users of the distributed workloads feature who did not have cluster administrator privileges for OpenShift might not have been able to access or use the job submission endpoint of the Ray dashboard. This issue is now resolved.
RHOAIENG-6578 - Request without token to a protected inference point not working by default
Previously, if you added Authorino as an authorization provider for the single-model serving platform and enabled token authorization for models that you deployed, it was still possible to query the models without specifying the tokens. This issue is now resolved.
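For reference, querying a token-protected inference endpoint now requires a bearer token, as in the following hypothetical sketch; the endpoint URL, token, payload shape, and certificate path are illustrative.

# Minimal sketch: query a token-protected inference endpoint. The URL,
# token, and request payload are hypothetical examples.
import requests

response = requests.post(
    "https://my-model-my-project.apps.example.com/v2/models/my-model/infer",
    headers={"Authorization": "Bearer sha256~<token>"},
    json={"inputs": [{"name": "input-0", "shape": [1, 4],
                      "datatype": "FP32", "data": [[0.1, 0.2, 0.3, 0.4]]}]},
    verify="/path/to/ca-bundle.crt",  # CA bundle if the route uses a private CA
)
print(response.status_code, response.json())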
RHOAIENG-5067 - Model server metrics page does not load for a model server based on the ModelMesh component
Previously, data science project names that contained capital letters or spaces could cause issues on the model server metrics page for model servers based on the ModelMesh component. The metrics page might not have received data correctly, resulting in a 400 Bad Request error and preventing the page from loading. This issue is now resolved.
RHOAIENG-4966 - Self-signed certificates in a custom CA bundle might be missing from the odh-trusted-ca-bundle configuration map
Previously, if you added a custom certificate authority (CA) bundle to use self-signed certificates, sometimes the custom certificates were missing from the odh-trusted-ca-bundle ConfigMap, or the non-reserved namespaces did not contain the odh-trusted-ca-bundle ConfigMap when the ConfigMap was set to managed. This issue is now resolved.
RHOAIENG-4938 (previously documented as RHOAIENG-4327) - Workbenches do not use the self-signed certificates from centrally configured bundle automatically
There are two bundle options to include self-signed certificates in OpenShift AI: ca-bundle.crt and odh-ca-bundle.crt. Previously, workbenches did not automatically use the self-signed certificates from the centrally configured bundle, and you had to define environment variables that pointed to your certificate path. This issue is now resolved.
RHOAIENG-4572 - Unable to run data science pipelines after install and upgrade in certain circumstances
Previously, you were unable to run data science pipelines after installing or upgrading OpenShift AI in the following circumstances:
- You installed OpenShift AI and you had a valid CA certificate. Within the default-dsci object, you changed the managementState field for the trustedCABundle field to Removed post-installation.
- You upgraded OpenShift AI from version 2.6 to version 2.8 and you had a valid CA certificate.
- You upgraded OpenShift AI from version 2.7 to version 2.8 and you had a valid CA certificate.
This issue is now resolved.
RHOAIENG-4524 - BuildConfig definitions for RStudio images contain occurrences of incorrect branch
Previously, the BuildConfig definitions for the RStudio and CUDA - RStudio workbench images pointed to the wrong branch in OpenShift AI. This issue is now resolved.
RHOAIENG-3963 - Unnecessary managed resource warning
Previously, when you edited and saved the OdhDashboardConfig custom resource for the redhat-ods-applications project, the system incorrectly displayed a Managed resource warning message. This issue is now resolved.
RHOAIENG-2542 - Inference service pod does not always get an Istio sidecar
Previously, when you deployed a model using the single-model serving platform (which uses KServe), the istio-proxy container could be missing from the resulting pod, even if the inference service had the sidecar.istio.io/inject=true annotation. This issue is now resolved.
RHOAIENG-1666 - The Import Pipeline button is prematurely accessible
Previously, when you imported a pipeline to a workbench that belonged to a data science project, the Import Pipeline button was accessible before the pipeline server was fully available. This issue is now resolved.
RHOAIENG-673 (previously documented as RHODS-12946) - Cannot install from PyPI mirror in disconnected environment or when using private certificates
In disconnected environments, Red Hat OpenShift AI cannot connect to the public-facing PyPI repositories, so you must specify a repository inside your network. Previously, if you were using private TLS certificates and a data science pipeline was configured to install Python packages, the pipeline run would fail. This issue is now resolved.
RHOAIENG-3355 - OVMS on KServe does not use accelerators correctly
Previously, when you deployed a model using the single-model serving platform and selected the OpenVINO Model Server serving runtime, if you requested an accelerator to be attached to your model server, the accelerator hardware was detected but was not used by the model when responding to queries. This issue is now resolved.
RHOAIENG-2869 - Cannot edit existing model framework and model path in a multi-model project
Previously, when you tried to edit a model in a multi-model project using the Deploy model dialog, the Model framework and Path values did not update. This issue is now resolved.
RHOAIENG-2724 - Model deployment fails because fields automatically reset in dialog
Previously, when you deployed a model or edited a deployed model, the Model servers and Model framework fields in the "Deploy model" dialog might have reset to the default state. The Deploy button might have remained enabled even though these mandatory fields no longer contained valid values. This issue is now resolved.
RHOAIENG-2099 - Data science pipeline server fails to deploy in fresh cluster
Previously, when you created a data science pipeline server on a fresh cluster, the user interface remained in a loading state and the pipeline server did not start. This issue is now resolved.
RHOAIENG-1199 (previously documented as ODH-DASHBOARD-1928) - Custom serving runtime creation error message is unhelpful
Previously, when you tried to create or edit a custom model-serving runtime and an error occurred, the error message did not indicate the cause of the error. The error messages have been improved.
RHOAIENG-556 - ServingRuntime for KServe model is created regardless of error
Previously, when you tried to deploy a KServe model and an error occurred, the InferenceService custom resource (CR) was still created and the model was shown on the Data Science Projects page, but the status always remained unknown. The KServe deploy process has been updated so that the ServingRuntime is not created if an error occurs.
RHOAIENG-548 (previously documented as ODH-DASHBOARD-1776) - Error messages when user does not have project administrator permission
Previously, if you did not have administrator permission for a project, you could not access some features, and the error messages did not explain why. For example, when you created a model server in an environment where you only had access to a single namespace, an Error creating model server error message appeared even though the model server was still successfully created. This issue is now resolved.
RHOAIENG-66 - Ray dashboard route deployed by CodeFlare SDK exposes self-signed certs instead of cluster cert
Previously, when you deployed a Ray cluster by using the CodeFlare SDK with the openshift_oauth=True option, the resulting route for the Ray cluster was secured by using the passthrough method and, as a result, the self-signed certificate used by the OAuth proxy was exposed. This issue is now resolved.
RHOAIENG-12 - Cannot access Ray dashboard from some browsers
In some browsers, users of the distributed workloads feature might not have been able to access the Ray dashboard because the browser automatically changed the prefix of the dashboard URL from http to https. This issue is now resolved.
RHODS-6216 - The ModelMesh oauth-proxy container is intermittently unstable
Previously, ModelMesh pods did not deploy correctly due to a failure of the ModelMesh oauth-proxy container. This issue occurred intermittently, and only if authentication was enabled in the ModelMesh runtime environment. This issue is now resolved.
RHOAIENG-535 - Metrics graph showing HTTP requests for deployed models is incorrect if there are no HTTP requests
Previously, if a deployed model did not receive at least one HTTP request for each of the two data types (success and failed), the graphs that show HTTP request performance metrics (for all models on the model server or for the specific model) rendered incorrectly, with a straight line that indicated a steadily increasing number of failed requests. This issue is now resolved.
RHOAIENG-1467 - Serverless net-istio controller pod might hit OOM
Previously, the Knative net-istio-controller pod (which is a dependency for KServe) might have continuously crashed due to an out-of-memory (OOM) error. This issue is now resolved.
RHOAIENG-1899 (previously documented as RHODS-6539) - The Anaconda Professional Edition cannot be validated and enabled
Previously, you could not enable the Anaconda Professional Edition because the dashboard’s key validation for it was inoperable. This issue is now resolved.
RHOAIENG-2269 - (Single-model) Dashboard fails to display the correct number of model replicas
Previously, on a single-model serving platform, the Models and model servers section of a data science project did not show the correct number of model replicas. This issue is now resolved.
RHOAIENG-2270 - (Single-model) Users cannot update model deployment settings
Previously, you couldn’t edit the deployment settings (for example, the number of replicas) of a model you deployed with a single-model serving platform. This issue is now resolved.
RHODS-8865 - A pipeline server fails to start unless you specify an Amazon Web Services (AWS) Simple Storage Service (S3) bucket resource
Previously, when you created a data connection for a data science project, the AWS_S3_BUCKET field was not designated as a mandatory field. However, if you attempted to configure a pipeline server with a data connection where the AWS_S3_BUCKET field was not populated, the pipeline server failed to start successfully. This issue is now resolved. The Configure pipeline server dialog has been updated to include the Bucket field as a mandatory field.
RHODS-12899 - OpenVINO runtime missing annotation for NVIDIA GPUs
Previously, if a user selected the OpenVINO model server (supports GPUs) runtime and selected an NVIDIA GPU accelerator in the model server user interface, the system could display an unnecessary warning that the selected accelerator was not compatible with the selected runtime. The warning is no longer displayed.
RHOAIENG-84 - Cannot use self-signed certificates with KServe
Previously, the single-model serving platform did not support self-signed certificates. This issue is now resolved. To use self-signed certificates with KServe, follow the steps described in Working with certificates.
RHOAIENG-164 - Number of model server replicas for KServe is not applied correctly from the dashboard
Previously, when you set a number of model server replicas different from the default (1), the model (server) was still deployed with 1 replica. This issue is now resolved.
RHOAIENG-288 - Recommended image version label for workbench is shown for two versions
Most of the workbench images that are available in OpenShift AI are provided in multiple versions. The only recommended version is the latest version. In Red Hat OpenShift AI 2.4 and 2.5, the Recommended tag was erroneously shown for multiple versions of an image. This issue is now resolved.
RHOAIENG-293 - Deprecated ModelMesh monitoring stack not deleted after upgrading from 2.4 to 2.5
In Red Hat OpenShift AI 2.5, the former ModelMesh monitoring stack was no longer deployed because it was replaced by user workload monitoring. However, the former monitoring stack was not deleted during an upgrade to OpenShift AI 2.5. Some components remained and used cluster resources. This issue is now resolved.
RHOAIENG-343 - Manual configuration of OpenShift Service Mesh and OpenShift Serverless does not work for KServe
If you installed OpenShift Serverless and OpenShift Service Mesh and then installed Red Hat OpenShift AI with KServe enabled, KServe was not deployed. This issue is now resolved.
RHOAIENG-517 - User with edit permissions cannot see created models
A user with edit permissions could not see any created models, unless they were the project owner or had admin permissions for the project. This issue is now resolved.
RHOAIENG-804 - Cannot deploy Large Language Models with KServe on FIPS-enabled clusters
Previously, Red Hat OpenShift AI was not yet fully designed for FIPS. You could not deploy Large Language Models (LLMs) with KServe on FIPS-enabled clusters. This issue is now resolved.
RHOAIENG-908 - Cannot use ModelMesh if KServe was previously enabled and then removed
Previously, when both ModelMesh and KServe were enabled in the DataScienceCluster object, and you subsequently removed KServe, you could no longer deploy new models with ModelMesh. You could continue to use models that were previously deployed with ModelMesh. This issue is now resolved.
RHOAIENG-2184 - Cannot create Ray clusters or distributed workloads
Previously, users could not create Ray clusters or distributed workloads in namespaces where they had admin or edit permissions. This issue is now resolved.
ODH-DASHBOARD-1991 - ovms-gpu-ootb is missing recommended accelerator annotation
Previously, when you added a model server to your project, the Serving runtime list did not show the Recommended serving runtime label for the NVIDIA GPU. This issue is now resolved.
RHOAIENG-807 - Accelerator profile toleration removed when restarting a workbench
Previously, if you created a workbench that used an accelerator profile that in turn included a toleration, restarting the workbench removed the toleration information, which meant that the restart could not complete. A freshly created GPU-enabled workbench might have started the first time, but it never successfully restarted afterward because the generated pod remained forever pending. This issue is now resolved.
DATA-SCIENCE-PIPELINES-OPERATOR-294 - Scheduled pipeline run that uses data-passing might fail to pass data between steps, or fail the step entirely
A scheduled pipeline run that uses an S3 object store to store the pipeline artifacts might fail with an error such as the following:
Bad value for --endpoint-url "cp": scheme is missing. Must be of the form http://<hostname>/ or https://<hostname>/
This issue occurred because the S3 object store endpoint was not successfully passed to the pods for the scheduled pipeline run. This issue is now resolved.
RHODS-4769 - GPUs on nodes with unsupported taints cannot be allocated to notebook servers
GPUs on nodes marked with any taint other than the supported nvidia.com/gpu taint could not be selected when creating a notebook server. This issue is now resolved.
RHODS-6346 - Unclear error message displays when using invalid characters to create a data science project
When creating a data science project’s data connection, workbench, or storage connection using invalid special characters, the following error message was displayed:
the object provided is unrecognized (must be of type Secret): couldn't get version/kind; json parse error: unexpected end of JSON input ({"apiVersion":"v1","kind":"Sec ...)
The error message failed to clearly indicate the problem. The error message now indicates that invalid characters were entered.
RHODS-6950 - Unable to scale down workbench GPUs when all GPUs in the cluster are being used
In earlier releases, it was not possible to scale down workbench GPUs if all GPUs in the cluster were being used. This issue applied to GPUs being used by one workbench, and GPUs being used by multiple workbenches. You can now scale down the GPUs by selecting None from the Accelerators list.
RHODS-8939 - Default shared memory for a Jupyter notebook created in a previous release causes a runtime error
Starting with release 1.31, this issue is resolved, and the shared memory for any new notebook is set to the size of the node.
For a Jupyter notebook created in a release earlier than 1.31, the default shared memory for a Jupyter notebook is set to 64 MB and you cannot change this default value in the notebook configuration.
To fix this issue, you must recreate the notebook or follow the process described in the Knowledgebase article How to change the shared memory for a Jupyter notebook in Red Hat OpenShift AI.
RHODS-9030 - Uninstall process for OpenShift AI might become stuck when removing kfdefs resources
The steps for uninstalling the OpenShift AI managed service are described in Uninstalling OpenShift AI.
However, even when you followed this guide, you might have seen that the uninstall process did not finish successfully. Instead, the process stayed on the step of deleting kfdefs resources that were used by the Kubeflow Operator. As shown in the following example, kfdefs resources might exist in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks namespaces:
$ oc get kfdefs.kfdef.apps.kubeflow.org -A
NAMESPACE                 NAME                                    AGE
redhat-ods-applications   rhods-anaconda                          3h6m
redhat-ods-applications   rhods-dashboard                         3h6m
redhat-ods-applications   rhods-data-science-pipelines-operator   3h6m
redhat-ods-applications   rhods-model-mesh                        3h6m
redhat-ods-applications   rhods-nbc                               3h6m
redhat-ods-applications   rhods-osd-config                        3h6m
redhat-ods-monitoring     modelmesh-monitoring                    3h6m
redhat-ods-monitoring     monitoring                              3h6m
rhods-notebooks           rhods-notebooks                         3h6m
rhods-notebooks           rhods-osd-config                        3h5m
Failed removal of the kfdefs resources might have also prevented later installation of a newer version of OpenShift AI. This issue no longer occurs.
RHODS-9764 - Data connection details get reset when editing a workbench
When you edited a workbench that had an existing data connection and then selected the Create new data connection option, the edit page might revert to the Use existing data connection option before you had finished specifying the new connection details. This issue is now resolved.
RHODS-9583 - Data Science dashboard did not detect an existing OpenShift Pipelines installation
When the OpenShift Pipelines Operator was installed as a global operator on your cluster, the OpenShift AI dashboard did not detect it. The OpenShift Pipelines Operator is now detected successfully.
ODH-DASHBOARD-1639 - Wrong TLS value in dashboard route
Previously, when a route was created for the OpenShift AI dashboard on OpenShift, the tls.termination field had an invalid default value of Reencrypt. This issue is now resolved. The new value is reencrypt.
ODH-DASHBOARD-1638 - Name placeholder in Triggered Runs tab shows Scheduled run name
Previously, when you clicked Pipelines > Runs and then selected the Triggered tab to configure a triggered run, the example value shown in the Name field was Scheduled run name. This issue is now resolved.
ODH-DASHBOARD-1547 - "We can’t find that page" message displayed in dashboard when pipeline operator installed in background
Previously, when you used the Data Science Pipelines page of the dashboard to install the OpenShift Pipelines Operator, the page refreshed to show a We can't find that page message when the Operator installation was complete. This issue is now resolved. When the Operator installation is complete, the dashboard redirects you to the Pipelines page, where you can create a pipeline server.
ODH-DASHBOARD-1545 - Dashboard keeps scrolling to bottom of project when Models tab is expanded
Previously, on the Data Science Projects page of the dashboard, if you clicked the Deployed models tab to expand it and then tried to perform other actions on the page, the page automatically scrolled back to the Deployed models section. This affected your ability to perform other actions. This issue is now resolved.
NOTEBOOKS-156 - Elyra included an example runtime called Test
Previously, Elyra included an example runtime configuration called Test. If you selected this configuration when running a data science pipeline, you could see errors. The Test configuration has now been removed.
RHODS-9622 - Duplicating a scheduled pipeline run does not copy the existing period and pipeline input parameter values
Previously, when you duplicated a scheduled pipeline run that had a periodic trigger, the duplication process did not copy the configured execution frequency for the recurring run or the specified pipeline input parameters. This issue is now resolved.
RHODS-8932 - Incorrect cron format was displayed by default when scheduling a recurring pipeline run
When you scheduled a recurring pipeline run by configuring a cron job, the OpenShift AI interface displayed an incorrect format by default. It now displays the correct format.
RHODS-9374 - Pipelines with non-unique names did not appear in the data science project user interface
If you launched a notebook from a Jupyter application that supported Elyra, or if you used a workbench, when you submitted a pipeline to be run, pipelines with non-unique names did not appear in the Pipelines section of the relevant data science project page or the Pipelines heading of the data science pipelines page. This issue has now been resolved.
RHODS-9329 - Deploying a custom model-serving runtime could result in an error message
Previously, if you used the OpenShift AI dashboard to deploy a custom model-serving runtime, the deployment process could fail with an Error retrieving Serving Runtime message. This issue is now resolved.
RHODS-9064 - After upgrade, the Data Science Pipelines tab was not enabled on the OpenShift AI dashboard
When you upgraded from OpenShift AI 1.26 to OpenShift AI 1.28, the Data Science Pipelines tab was not enabled in the OpenShift AI dashboard. This issue is resolved in OpenShift AI 1.29.
RHODS-9443 - Exporting an Elyra pipeline exposed S3 storage credentials in plain text
In OpenShift AI 1.28.0, when you exported an Elyra pipeline from JupyterLab in Python DSL format or YAML format, the generated output contained S3 storage credentials in plain text. This issue has been resolved in OpenShift AI 1.28.1. However, after you upgrade to OpenShift AI 1.28.1, if your deployment contains a data science project with a pipeline server and a data connection, you must perform the following additional actions for the fix to take effect:
- Refresh your browser page.
- Stop any running workbenches in your deployment and restart them.
Furthermore, to confirm that your Elyra runtime configuration contains the fix, perform the following actions:
- In the left sidebar of JupyterLab, click Runtimes.
- Hover the cursor over the runtime configuration that you want to view and click the Edit button. The Data Science Pipelines runtime configuration page opens.
- Confirm that KUBERNETES_SECRET is defined as the value in the Cloud Object Storage Authentication Type field.
- Close the runtime configuration without changing it.
RHODS-8460 - When editing the details of a shared project, the user interface remained in a loading state without reporting an error
When a user with permission to edit a project attempted to edit its details, the user interface remained in a loading state and did not display an appropriate error message. Users with permission to edit projects cannot edit any fields in the project, such as its description. Those users can edit only components belonging to a project, such as its workbenches, data connections, and storage.
The user interface now displays an appropriate error message and does not try to update the project description.
RHODS-8482 - Data science pipeline graphs did not display node edges for running pipelines
If you ran pipelines that did not contain Tekton-formatted Parameters or when expressions in their YAML code, the OpenShift AI user interface did not display connecting edges to and from graph nodes. For example, if you used a pipeline containing the runAfter property or Workspaces, the user interface displayed the graph for the executed pipeline without edge connections. The OpenShift AI user interface now displays connecting edges to and from graph nodes.
RHODS-8923 - Newly created data connections were not detected when you attempted to create a pipeline server
If you created a data connection from within a Data Science project, and then attempted to create a pipeline server, the Configure a pipeline server dialog did not detect the data connection that you created. This issue is now resolved.
RHODS-8461 - When sharing a project with another user, the OpenShift AI user interface text was misleading
When you attempted to share a Data Science project with another user, the user interface text misleadingly implied that users could edit all of its details, such as its description. However, users can edit only components belonging to a project, such as its workbenches, data connections, and storage. This issue is now resolved: the user interface text no longer implies that users can edit all of a project's details.
RHODS-8462 - Users with "Edit" permission could not create a Model Server
Users with "Edit" permissions can now create a Model Server without token authorization. Users must have "Admin" permissions to create a Model Server with token authorization.
RHODS-8796 - OpenVINO Model Server runtime did not have the required flag to force GPU usage
OpenShift AI includes the OpenVINO Model Server (OVMS) model-serving runtime by default. When you configured a new model server and chose this runtime, the Configure model server dialog enabled you to specify a number of GPUs to use with the model server. However, when you finished configuring the model server and deployed models from it, the model server did not actually use any GPUs. This issue is now resolved and the model server uses the GPUs.
RHODS-8861 - Changing the host project when creating a pipeline run resulted in an inaccurate list of available pipelines
If you changed the host project while creating a pipeline run, the interface failed to make the pipelines of the new host project available. Instead, the interface showed pipelines that belong to the project you initially selected on the Data Science Pipelines > Runs page. This issue is now resolved. You no longer select a pipeline from the Create run page. The pipeline selection is automatically updated when you click the Create run button, based on the current project and its pipeline.
RHODS-8249 - Environment variables uploaded as ConfigMap were stored in Secret instead
Previously, in the OpenShift AI interface, when you added environment variables to a workbench by uploading a ConfigMap configuration, the variables were stored in a Secret object instead. This issue is now resolved.
RHODS-7975 - Workbenches could have multiple data connections
Previously, if you changed the data connection for a workbench, the existing data connection was not released. As a result, a workbench could stay connected to multiple data sources. This issue is now resolved.
RHODS-7948 - Uploading a secret file containing environment variables resulted in double-encoded values
Previously, when creating a workbench in a data science project, if you uploaded a YAML-based secret file containing environment variables, the environment variable values were not decoded, and the already-encoded values were encoded again in the resulting OpenShift secret. This issue is now resolved.
RHODS-6429 - An error was displayed when creating a workbench with the Intel OpenVINO or Anaconda Professional Edition images
Previously, when you created a workbench with the Intel OpenVINO or Anaconda Professional Edition images, an error appeared during the creation process. However, the workbench was still successfully created. This issue is now resolved.
RHODS-6372 - Idle notebook culler did not take active terminals into account
Previously, if a notebook image had a running terminal, but no active, running kernels, the idle notebook culler detected the notebook as inactive and stopped the terminal. This issue is now resolved.
RHODS-5700 - Data connections could not be created or connected to when creating a workbench
When creating a workbench, users were unable to create a new data connection, or connect to existing data connections.
RHODS-6281 - OpenShift AI administrators could not access Settings page if an admin group was deleted from cluster
Previously, if a Red Hat OpenShift AI administrator group was deleted from the cluster, OpenShift AI administrator users could no longer access the Settings page on the OpenShift AI dashboard. In particular, the following behavior was seen:
- When an OpenShift AI administrator user tried to access the Settings → User management page, a "Page Not Found" error appeared.
- Cluster administrators did not lose access to the Settings page on the OpenShift AI dashboard. When a cluster administrator accessed the Settings → User management page, a warning message appeared, indicating that the deleted OpenShift AI administrator group no longer existed in OpenShift. The deleted administrator group was then removed from OdhDashboardConfig, and administrator access was restored.
This issue is now resolved.
RHODS-1968 - Deleted users stayed logged in until dashboard was refreshed
Previously, when a user’s permissions for the Red Hat OpenShift AI dashboard were revoked, the user would notice the change only after a refresh of the dashboard page.
This issue is now resolved. When a user’s permissions are revoked, the OpenShift AI dashboard locks the user out within 30 seconds, without the need for a refresh.
RHODS-6384 - A workbench data connection was incorrectly updated when creating a duplicated data connection
When creating a data connection that contained the same name as an existing data connection, the data connection creation failed, but the associated workbench still restarted and connected to the wrong data connection. This issue has been resolved. Workbenches now connect to the correct data connection.
RHODS-6370 - Workbenches failed to receive the latest toleration
Previously, to acquire the latest toleration, users had to attempt to edit the relevant workbench, make no changes, and save the workbench again. Users can now apply the latest toleration change by stopping and then restarting their data science project’s workbench.
RHODS-6779 - Models failed to be served after upgrading from OpenShift AI 1.20 to OpenShift AI 1.21
When upgrading from OpenShift AI 1.20 to OpenShift AI 1.21, the modelmesh-serving pod attempted to pull a non-existent image, causing an image pull error. As a result, models could not be served using the model serving feature in OpenShift AI. The odh-openvino-servingruntime-container-v1.21.0-15 image now deploys successfully.
RHODS-5945 - Anaconda Professional Edition could not be enabled in OpenShift AI
Anaconda Professional Edition could not be enabled for use in OpenShift AI. Instead, an InvalidImageName error was displayed in the associated pod's Events page. Anaconda Professional Edition can now be successfully enabled.
RHODS-5822 - Admin users were not warned when usage exceeded 90% and 100% for PVCs created by data science projects
Warnings indicating when a PVC exceeded 90% and 100% of its capacity failed to display to admin users for PVCs created by data science projects. Admin users can now view warnings in the dashboard when a PVC exceeds 90% and 100% of its capacity.
RHODS-5889 - Error message was not displayed if a data science notebook was stuck in "pending" status
If a notebook pod could not be created, the OpenShift AI interface did not show an error message. An error message is now displayed if a data science notebook cannot be spawned.
RHODS-5886 - Returning to the Hub Control Panel dashboard from the data science workbench failed
If you attempted to return to the dashboard from your workbench Jupyter notebook by clicking on File → Log Out, you were redirected to the dashboard and remained on a "Logging out" page. Likewise, if you attempted to return to the dashboard by clicking on File → Hub Control Panel, you were incorrectly redirected to the Start a notebook server page. Returning to the Hub Control Panel dashboard from the data science workbench now works as expected.
RHODS-6101 - Administrators were unable to stop all notebook servers
OpenShift AI administrators could not stop all notebook servers simultaneously. Administrators can now stop all notebook servers using the Stop all servers button and stop a single notebook by selecting Stop server from the action menu beside the relevant user.
RHODS-5891 - Workbench event log was not clearly visible
When creating a workbench, users could not easily locate the event log window in the OpenShift AI interface. The Starting label under the Status column is now underlined when you hover over it, indicating you can click on it to view the notebook status and the event log.
RHODS-6296 - ISV icons did not render when using a browser other than Google Chrome
When using a browser other than Google Chrome, not all ISV icons under Explore and Resources pages were rendered. ISV icons now display properly on all supported browsers.
RHODS-3182 - Incorrect number of available GPUs was displayed in Jupyter
When a user attempted to create a notebook instance in Jupyter, the maximum number of GPUs available for scheduling was not updated as GPUs were assigned. Jupyter now displays the correct number of GPUs available.
RHODS-5890 - When multiple persistent volumes were mounted to the same directory, workbenches failed to start
When mounting more than one persistent volume (PV) to the same mount folder in the same workbench, creation of the notebook pod failed and no errors were displayed to indicate there was an issue.
RHODS-5768 - Data science projects were not visible to users in Red Hat OpenShift AI
Removing the [DSP] suffix at the end of a project's Display Name property caused the associated data science project to no longer be visible. It is no longer possible for users to remove this suffix.
RHODS-5701 - Data connection configuration details were overwritten
When a data connection was added to a workbench, the configuration details for that data connection were saved in environment variables. When a second data connection was added, the configuration details were saved using the same environment variables, which meant the configuration for the first data connection was overwritten. Users can now add a maximum of one data connection to each workbench.
RHODS-5252 - The notebook Administration page did not provide administrator access to a user’s notebook server
The notebook Administration page, accessed from the OpenShift AI dashboard, did not provide the means for an administrator to access a user’s notebook server. Administrators were restricted to only starting or stopping a user’s notebook server.
RHODS-2438 - PyTorch and TensorFlow images were unavailable when upgrading
When upgrading from OpenShift AI 1.3 to a later version, PyTorch and TensorFlow images were unavailable to users for approximately 30 minutes. As a result, users were unable to start PyTorch and TensorFlow notebooks in Jupyter during the upgrade process. This issue has now been resolved.
RHODS-5354 - Environment variable names were not validated when starting a notebook server
Environment variable names were not validated on the Start a notebook server page. If an invalid environment variable was added, users were unable to successfully start a notebook. The environment variable name is now checked in real time. If an invalid environment variable name is entered, an error message displays indicating that valid environment variable names must consist of alphabetic characters, digits, _, -, or ., and must not start with a digit.
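To illustrate the stated rule, a minimal sketch in Python; the regular expression mirrors the Kubernetes environment-variable name format and is an assumption about the dashboard's exact check:
import re

# Assumed pattern: letters, digits, _, -, or ., and the name must not start with a digit.
ENV_NAME_PATTERN = re.compile(r"^[-._a-zA-Z][-._a-zA-Z0-9]*$")

def is_valid_env_var_name(name: str) -> bool:
    # Returns True only if the entire name matches the assumed pattern.
    return bool(ENV_NAME_PATTERN.match(name))

print(is_valid_env_var_name("MY_VAR"))   # True
print(is_valid_env_var_name("1BADVAR"))  # False: starts with a digit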
RHODS-4617 - The Number of GPUs drop-down was only visible if there were GPUs available
Previously, the Number of GPUs drop-down was only visible on the Start a notebook server page if GPU nodes were available. The Number of GPUs drop-down now also correctly displays if an autoscaling machine pool is defined in the cluster, even if no GPU nodes are currently available, possibly resulting in the provisioning of a new GPU node on the cluster.
RHODS-5420 - Cluster admin did not get administrator access if it was the only user present in the cluster
Previously, when the cluster admin was the only user present in the cluster, it did not get Red Hat OpenShift administrator access automatically. Administrator access is now correctly applied to the cluster admin user.
RHODS-4321 - Incorrect package version displayed during notebook selection
The Start a notebook server page displayed an incorrect version number (11.4 instead of 11.7) for the CUDA notebook image. The version of CUDA installed is no longer specified on this page.
RHODS-5001 - Admin users could add invalid tolerations to notebook pods
An admin user could add invalid tolerations on the Cluster settings page without triggering an error. If an invalid toleration was added, users were unable to successfully start notebooks. The toleration key is now checked in real time. If an invalid toleration name is entered, an error message displays indicating that valid toleration names consist of alphanumeric characters, -, _, or ., and must start and end with an alphanumeric character.
RHODS-5100 - Group role bindings were not applied to cluster administrators
Previously, if you had assigned cluster admin privileges to a group rather than a specific user, the dashboard failed to recognize administrative privileges for users in the administrative group. Group role bindings are now correctly applied to cluster administrators as expected.
RHODS-4947 - Old Minimal Python notebook image persisted after upgrade
After upgrading from OpenShift AI 1.14 to 1.15, the older version of the Minimal Python notebook persisted, including all associated package versions. The older version of the Minimal Python notebook no longer persists after upgrade.
RHODS-4935 - Excessive "missing x-forwarded-access-token header" error messages displayed in dashboard log
The rhods-dashboard pod's log contained an excessive number of "missing x-forwarded-access-token header" error messages due to a readiness probe hitting the /status endpoint. This issue has now been resolved.
RHODS-2653 - Error occurred while fetching the generated images in the sample Pachyderm notebook
An error occurred when a user attempted to fetch an image using the sample Pachyderm notebook in Jupyter. The error stated that the image could not be found. Pachyderm has corrected this issue.
RHODS-4584 - Jupyter failed to start a notebook server using the OpenVINO notebook image
Jupyter’s Start a notebook server page failed to start a notebook server using the OpenVINO notebook image. Intel has provided an update to the OpenVINO operator to correct this issue.
RHODS-4923 - A non-standard check box displayed after disabling usage data collection
After disabling usage data collection on the Cluster settings page, when a user accessed another area of the OpenShift AI dashboard, and then returned to the Cluster settings page, the Allow collection of usage data check box had a non-standard style applied, and therefore did not look the same as other check boxes when selected or cleared.
RHODS-4938 - Incorrect headings were displayed in the Notebook Images page
The Notebook Images page, accessed from the Settings page on the OpenShift AI dashboard, displayed incorrect headings in the user interface. The Notebook image settings heading displayed as BYON image settings, and the Import Notebook images heading displayed as Import BYON images. The correct headings are now displayed as expected.
RHODS-4818 - Jupyter was unable to display images when the NVIDIA GPU add-on was installed
The Start a notebook server page did not display notebook images after installing the NVIDIA GPU add-on. Images are now correctly displayed, and can be started from the Start a notebook server page.
RHODS-4797 - PVC usage limit alerts were not sent when usage exceeded 90% and 100%
Alerts indicating when a PVC exceeded 90% and 100% of its capacity failed to be triggered and sent. These alerts are now triggered and sent as expected.
RHODS-4366 - Cluster settings were reset on operator restart
When the OpenShift AI operator pod was restarted, cluster settings were sometimes reset to their default values, removing any custom configuration. The OpenShift AI operator was restarted when a new version of OpenShift AI was released, and when the node that ran the operator failed. This issue occurred because the operator deployed ConfigMaps incorrectly. Operator deployment instructions have been updated so that this no longer occurs.
RHODS-4318 - The OpenVINO notebook image failed to build successfully
The OpenVINO notebook image failed to build successfully and displayed an error message. This issue has now been resolved.
RHODS-3743 - Starburst Galaxy quick start did not provide download link in the instruction steps
The Starburst Galaxy quick start, located on the Resources page on the dashboard, required the user to open the explore-data.ipynb notebook, but failed to provide a link within the instruction steps. Instead, the link was provided in the quick start's introduction.
RHODS-1974 - Changing alert notification emails required pod restart
Changes to the list of notification email addresses in the Red Hat OpenShift AI Add-On were not applied until after the rhods-operator pod and the prometheus-* pod were restarted.
RHODS-2738 - Red Hat OpenShift API Management 1.15.2 add-on installation did not successfully complete
For OpenShift AI installations that are integrated with the Red Hat OpenShift API Management 1.15.2 add-on, the Red Hat OpenShift API Management installation process did not successfully obtain the SMTP credentials secret. Subsequently, the installation did not complete.
RHODS-3237 - GPU tutorial did not appear on dashboard
The "GPU computing" tutorial, located at Gtc2018-numba, did not appear on the Resources page on the dashboard.
RHODS-3069 - GPU selection persisted when GPU nodes were unavailable
When a user provisioned a notebook server with GPU support, and the utilized GPU nodes were subsequently removed from the cluster, the user could not create a notebook server. This occurred because the most recently used setting for the number of attached GPUs was used by default.
RHODS-3181 - Pachyderm now compatible with OpenShift Dedicated 4.10 clusters
Pachyderm was not initially compatible with OpenShift Dedicated 4.10, and so was not available in OpenShift AI running on an OpenShift Dedicated 4.10 cluster. Pachyderm is now available on and compatible with OpenShift Dedicated 4.10.
RHODS-2160 - Uninstall process failed to complete when both OpenShift AI and OpenShift API Management were installed
When OpenShift AI and OpenShift API Management are installed together on the same cluster, they use the same Virtual Private Cluster (VPC). The uninstall process for these Add-ons attempts to delete the VPC. Previously, when both Add-ons were installed, the uninstall process for one service was blocked because the other service still had resources in the VPC. The cleanup process has been updated so that this conflict does not occur.
RHODS-2747 - Images were incorrectly updated after upgrading OpenShift AI
After the process to upgrade OpenShift AI completed, Jupyter failed to update its notebook images. This was due to an issue with the image caching mechanism. Images now update correctly after an upgrade.
RHODS-2425 - Incorrect TensorFlow and TensorBoard versions displayed during notebook selection
The Start a notebook server page displayed incorrect version numbers (2.4.0) for TensorFlow and TensorBoard in the TensorFlow notebook image. These versions have been corrected to TensorFlow 2.7.0 and TensorBoard 2.6.0.
RHODS-24339 - Quick start links did not display for enabled applications
For some applications, the Open quick start link failed to display on the application tile on the Enabled page. As a result, users did not have direct access to the quick start tour for the relevant application.
RHODS-2215 - Incorrect Python versions displayed during notebook selection
The Start a notebook server page displayed incorrect versions of Python for the TensorFlow and PyTorch notebook images. Additionally, the third integer of package version numbers is no longer displayed.
RHODS-1977 - Ten minute wait after notebook server start fails
If the Jupyter leader pod failed while the notebook server was being started, the user could not access their notebook server until the pod restarted, which took approximately ten minutes. This process has been improved so that the user is redirected to their server when a new leader pod is elected. If this process times out, users see a 504 Gateway Timeout error, and can refresh to access their server.
Chapter 8. Known issues
This section describes known issues in Red Hat OpenShift AI and any known methods of working around these issues.
RHOAIENG-15123 (also documented as RHOAIENG-10790 and RHOAIENG-14265) - Pipeline scheduled runs fail after upgrading
When you upgrade to OpenShift AI 2.15, data science pipeline scheduled runs that existed before the upgrade fail to execute. An error message appears in the task pod.
- Workaround
- Follow the steps in the How to resolve scheduled pipeline run failures after upgrading to Red Hat OpenShift AI 2.15.0 Knowledgebase solution.
RHOAIENG-15033 - Model registry instances do not restart or update after upgrading OpenShift AI
When you upgrade OpenShift AI, existing instances of the model registry component are not updated, which causes the instance pods to use older images than the ones referenced by the operator pod.
- Workaround
After upgrading OpenShift AI, for each existing model registry, create a new model registry instance that uses the same database configuration, and delete the old model registry instance. Your new model registry instance will contain all existing registered models and their metadata. You can perform these steps from the OpenShift AI dashboard or from the OpenShift command-line interface (CLI). From the OpenShift AI dashboard:
For each existing model registry, create a new instance that uses the same database configuration.
- From the OpenShift AI dashboard, click Settings → Model registry settings.
- Check the database connection details for an existing registry by clicking the action menu (⋮) beside the model registry on the Model registry settings page, and then clicking View database configuration.
- Create a new model registry with the same database connection details as the existing model registry instance.
- In OpenShift, verify that the REST and GRPC images defined in your model registry instance pod match the ones in the model registry operator pod.
After creating the new model registry, delete the original registry:
- On the Model registry settings page, click the action menu (⋮) beside the model registry, and then click Delete model registry.
- In the Delete model registry? dialog that appears, enter the name of the model registry in the text field to confirm that you intend to delete it.
- Click Delete model registry.
Alternatively, from the OpenShift command-line interface (CLI):
Run the following command, replacing <model_registry_name> with the name of your model registry:
oc patch -n rhoai-model-registries mr <model_registry_name> --type=merge -p='{"spec":{"grpc":{"image":""},"rest":{"image":""}}}'
- In OpenShift, verify that the REST and GRPC images defined in your model registry instance pod match the ones in the model registry operator pod.
RHOAIENG-15008 - Error when creating a bias metric from the CLI without a request name
The user interface might display an error message when viewing bias metrics if the requestName parameter is not set. If you use the user interface to view bias metrics, but want to configure them through the CLI, you must specify a requestName parameter within your payload:
curl -sk -H "Authorization: Bearer ${TOKEN}" -X POST --location $TRUSTY_ROUTE/metrics/group/fairness/spd/request --header 'Content-Type: application/json' --data "{ \"modelId\": \"demo-loan-nn-onnx-alpha\", \"protectedAttribute\": \"Is Male-Identifying?\", \"privilegedAttribute\": 1.0, \"unprivilegedAttribute\": 0.0, \"outcomeName\": \"Will Default?\", \"favorableOutcome\": 0, \"batchSize\": 5000, \"requestName\": \"My Request\" }"
- Workaround
- Use the CLI to delete metric requests that have no specified requestName, and then refresh the UI.
RHOAIENG-14986 - Incorrect package path causes copy_demo_nbs to fail
In OpenShift AI SDK version 0.22.0, the copy_demo_nbs() function of the CodeFlare SDK fails because of an incorrect path to the SDK package. Running this function results in a FileNotFound error.
- Workaround
There are two options to work around the issue:
- Manually create a demo-notebooks directory.
- From a Python shell or a blank notebook, run the following Python script to create the demo-notebooks directory:
import codeflare_sdk
import os
import pathlib
import shutil

nb_dir = os.path.join(os.path.dirname(codeflare_sdk.__file__), "demo-notebooks")

def copy_nbs(dir: str = "./demo-notebooks", overwrite: bool = False):
    # does dir exist already?
    if overwrite is False and pathlib.Path(dir).exists():
        raise FileExistsError(
            f"Directory {dir} already exists. Please remove it or provide a different location."
        )
    shutil.copytree(nb_dir, dir, dirs_exist_ok=True)

copy_nbs()
RHOAIENG-14552 - Workbench or notebook OAuth proxy fails with FIPS on OCP 4.16
When using OpenShift 4.16 or newer in a FIPS-enabled cluster, connecting to a running workbench fails because the connection between the internal component oauth-proxy and the OpenShift ingress fails with a TLS handshake error. When opening a workbench, the browser shows an "Application is not available" screen without any additional diagnostics.
- Workaround
- None
RHOAIENG-14271 - Compatibility errors occur when using different Python versions in Ray clusters with Jupyter notebooks
When you use Python version 3.11 in a Jupyter notebook, and then create a Ray cluster, the cluster defaults to a workbench image that contains Ray version 2.35 and Python version 3.9. This mismatch can cause compatibility errors, as the Ray Python client requires a Python version that matches your workbench configuration.
- Workaround
- Use the ClusterConfiguration argument to specify a workbench image for your Ray cluster that contains a matching Python version (see the sketch below).
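For example, a minimal sketch using the CodeFlare SDK; the cluster name, namespace, and image are placeholders, and parameter names can vary between SDK versions:
from codeflare_sdk import Cluster, ClusterConfiguration

# Placeholder image: choose a Ray image whose Python version matches the
# Python version of your workbench (for example, Python 3.11).
cluster = Cluster(ClusterConfiguration(
    name="raytest",                            # placeholder cluster name
    namespace="my-data-science-project",       # placeholder namespace
    num_workers=1,
    image="quay.io/example/ray:2.35.0-py311",  # placeholder image reference
))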
RHOAIENG-14197 - Tooltip text for CPU and Memory graphs is clipped and therefore unreadable
When you hover the cursor over the CPU and Memory graphs in the Top resource-consuming distributed workloads section on the Project metrics tab of the Distributed Workloads Metrics page, the tooltip text is clipped, and therefore unreadable.
- Workaround
- View the project metrics for your distributed workloads within the CPU and Memory graphs themselves.
RHOAIENG-14095 - The dashboard is temporarily unavailable after installing the OpenShift AI Operator
After you install the OpenShift AI Operator, the OpenShift AI dashboard is unavailable for approximately three minutes. As a result, a Cannot read properties of undefined page might appear.
- Workaround
- After you first access the OpenShift AI dashboard, wait for a few minutes and then refresh the page in your browser.
RHOAIENG-13633 - Cannot set a serving platform for a project without first deploying a model from outside of the model registry
Currently, you cannot deploy a model from a model registry to a project unless the project already has single-model or multi-model serving selected. The only way to select single-model or multi-model serving from the OpenShift AI UI is to first deploy a model or model server from outside the registry.
- Workaround
Before deploying a model from a model registry to a new project, select a serving platform for the project from the OpenShift AI dashboard, or from the OpenShift console.
From the OpenShift AI dashboard:
- From the OpenShift AI dashboard, click Data Science Projects.
- Click the Models tab.
- Deploy a model or create a model server, and then delete it. For more information about models and model servers in OpenShift AI, see Serving models.
From the OpenShift console:
- In the OpenShift web console, select Home → Projects.
- Click the name of the project.
- Click the YAML tab.
- In the metadata.labels section, add the modelmesh-enabled label. To choose multi-model serving, set the modelmesh-enabled value to true. To choose single-model serving, set the modelmesh-enabled value to false. See the example after these steps.
- Click Save.
After you select a serving platform for a project, you can deploy models from a model registry to that project.
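For reference, a minimal sketch of the resulting project metadata; the project name is a placeholder:
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: my-data-science-project   # placeholder project name
  labels:
    modelmesh-enabled: "true"     # "true" = multi-model serving, "false" = single-model serving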
RHOAIENG-12233 - The chat_template parameter is required when using the /v1/chat/completions endpoint
When configuring the vLLM ServingRuntime for KServe runtime and querying a model using the /v1/chat/completions endpoint, the model fails with the following 400 Bad Request error:
As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one
Transformers v4.44.2 is shipped with vLLM v0.5.5. As of vLLM v0.5.5, you must provide a chat template while querying a model using the /v1/chat/completions endpoint.
- Workaround
If your model does not include a predefined chat template, you can use the chat-template command-line parameter to specify a chat template in your custom vLLM runtime, as shown in the example. Replace <CHAT_TEMPLATE> with the path to your template.
containers:
- args:
  - --chat-template=<CHAT_TEMPLATE>
You can use the chat templates that are available as .jinja files in https://github.com/opendatahub-io/vllm/tree/main/examples or with the vLLM image under /apps/data/template. For more information, see Chat templates.
RHOAIENG-11024 - Resource entries get wiped out after removing the opendatahub.io/managed annotation
Manually removing the opendatahub.io/managed annotation from any component deployment YAML file might cause resource entry values in the file to be erased.
- Workaround
To remove the annotation from a deployment, use the following steps to delete the deployment.
The controller pod for the component will redeploy automatically with default values.
- Log in to the OpenShift console as a cluster administrator.
- In the Administrator perspective, click Workloads > Deployments.
- From the Project drop-down list, select redhat-ods-applications.
- Click the options menu beside the deployment for which you want to remove the annotation.
- Click Delete Deployment.
RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later
If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version.
ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
- Workaround
Delete the existing AppWrapper CRD:
$ oc delete crd appwrappers.workload.codeflare.dev
Install the latest version of the AppWrapper CRD:
$ oc apply -f https://raw.githubusercontent.com/project-codeflare/codeflare-operator/main/config/crd/crd-appwrapper.yml
RHOAIENG-7947 - Model serving fails during query in KServe
If you initially install the ModelMesh component and enable the multi-model serving platform, but later install the KServe component and enable the single-model serving platform, inference requests to models deployed on the single-model serving platform might fail. In these cases, inference requests return a 404 - Not Found error and the logs for the odh-model-controller deployment object show a Reconciler error message.
- Workaround
- In OpenShift, restart the odh-model-controller deployment object.
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has condition groups, for example, dsl.If, the UI displays a Running status for the groups, even after the pipeline execution is complete.
- Workaround
You can confirm if a pipeline is still running by checking that no child tasks remain active.
- From the OpenShift AI dashboard, click Data Science Pipelines → Runs.
- From the Project drop-down menu, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6486 - Pod labels, annotations, and tolerations cannot be configured when using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image
When using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image, you cannot configure pod labels, annotations, or tolerations from an executed pipeline. This is due to a dependency conflict with the kfp and tf2onnx packages.
- Workaround
If you are working with the TensorFlow 2024.1 notebook image, after you have completed your work, change the assigned workbench notebook image to the Standard Data Science 2024.1 notebook image.
In the Pipeline properties tab in the Elyra JupyterLab extension, set the TensorFlow runtime image as the default runtime image for each pipeline node individually, along with the relevant pod label, annotation, or toleration for each node.
RHOAIENG-6435 - Distributed workloads resources are not included in Project metrics
When you click Distributed Workloads Metrics > Project metrics and view the Requested resources section, the Requested by all projects value currently excludes the resources for distributed workloads that have not yet been admitted to the queue.
- Workaround
- None.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics
In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.
- Workaround
- None.
RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade
Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Enabling data science pipelines 2.0.
- Workaround
- Remove the existing Argo Workflows installation or set datasciencepipelines to Removed (see the sketch below), and then proceed with the installation or upgrade.
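For example, a minimal sketch of the relevant DataScienceCluster fields; the resource name is a placeholder:
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc               # placeholder resource name
spec:
  components:
    datasciencepipelines:
      managementState: Removed    # disables the data science pipelines component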
RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error
If you have enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but have not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly shows that KServe is not ready. However, the Degraded condition incorrectly shows a value of False.
- Workaround
- Install the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators, and then recreate the DSC.
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
- If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
- If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageUri field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path. See the sketch after these steps.
KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
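To illustrate the InferenceService option, a hedged sketch; the resource name, model format, and runtime name are placeholders, and the s3:// scheme is assumed for the storageUri value:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-ovms-model             # placeholder name
spec:
  predictor:
    model:
      modelFormat:
        name: openvino_ir         # placeholder model format
      runtime: ovms-runtime       # placeholder runtime name
      # Point at the parent directory; KServe pulls from the 1/ subdirectory.
      storageUri: s3://<s3_storage_bucket>/models/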
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model.
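For example, a hedged sketch of a query against the completed URL; the endpoint, token, tensor name, and shape are placeholders that depend on your model:
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -X POST "https://<inference_endpoint_url>/v2/models/<model-name>/infer" \
  -d '{"inputs": [{"name": "input-0", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}'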
RHOAIENG-2759 - Model deployment fails when both secured and regular model servers are present in a project
When you create a second model server in a project where one server is using token authentication, and the other server does not use authentication, the deployment of the second model might fail to start.
- Workaround
- None.
RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart
The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.
- Workaround
- None.
RHOAIENG-2585 - UI does not display an error/warning when UWM is not enabled in the cluster
Red Hat OpenShift AI does not correctly warn users if User Workload Monitoring (UWM) is disabled in the cluster. UWM is necessary for the correct functionality of model metrics.
- Workaround
- Manually ensure that UWM is enabled in your cluster, as described in Enabling monitoring for user-defined projects.
RHOAIENG-2555 - Model framework selector does not reset when changing Serving Runtime in form
When you use the Deploy model dialog to deploy a model on the single-model serving platform, if you select a runtime and a supported framework, but then switch to a different runtime, the existing framework selection is not reset. This means that it is possible to deploy the model with a framework that is not supported for the selected runtime.
- Workaround
- While deploying a model, if you change your selected runtime, click the Select a framework list again and select a supported framework.
RHOAIENG-2468 - Services in the same project as KServe might become inaccessible in OpenShift
If you deploy a non-OpenShift AI service in a data science project that contains models deployed on the single-model serving platform (which uses KServe), the accessibility of the service might be affected by the network configuration of your OpenShift cluster. This is particularly likely if you are using the OVN-Kubernetes network plugin in combination with host network namespaces.
- Workaround
Perform one of the following actions:
- Deploy the service in another data science project that does not contain models deployed on the single-model serving platform. Or, deploy the service in another OpenShift project.
- In the data science project that contains the service, add a network policy to accept ingress traffic to your application pods, as shown in the following example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - {}
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-1919 - Model Serving page fails to fetch or report the model route URL soon after its deployment
When deploying a model from the OpenShift AI dashboard, the system displays the following warning message while the Status column of your model indicates success with an OK/green checkmark.
Failed to get endpoint for this deployed model. routes.route.openshift.io "<model_name>" not found
- Workaround
- Refresh your browser page.
RHOAIENG-404 - No Components Found page randomly appears instead of Enabled page in OpenShift AI dashboard
A No Components Found page might appear when you access the Red Hat OpenShift AI dashboard.
- Workaround
- Refresh the browser page.
RHOAIENG-1128 - Unclear error message displays when attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench
When attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench, an unclear error message is displayed.
- Workaround
- Verify that your PV is connected to a workbench before attempting to increase the size.
RHOAIENG-545 - Cannot specify a generic default node runtime image in JupyterLab pipeline editor
When you edit an Elyra pipeline in the JupyterLab IDE pipeline editor, click the PIPELINE PROPERTIES tab, scroll to the Generic Node Defaults section, and edit the Runtime Image field, your changes are not saved.
- Workaround
- Define the required runtime image explicitly for each node. Click the NODE PROPERTIES tab, and specify the required image in the Runtime Image field.
RHOAIENG-497 - Removing the DSCI results in the OpenShift Service Mesh CR being deleted without user notification
If you delete the DSCInitialization resource, the OpenShift Service Mesh CR is also deleted. A warning message is not shown.
- Workaround
- None.
RHOAIENG-282 - Workload should not be dispatched if required resources are not available
Sometimes a workload is dispatched even though a single machine instance does not have sufficient resources to provision the RayCluster successfully. The AppWrapper CRD remains in a Running state and related pods are stuck in a Pending state indefinitely.
- Workaround
- Add extra resources to the cluster.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are created and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods, as sketched below.
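A hedged sketch of the fields involved, assuming the ServiceMeshControlPlane v2 schema; the resource name, namespace, and memory values are illustrative only:
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: data-science-smcp         # placeholder name
  namespace: istio-system         # placeholder namespace
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi         # illustrative value
    egress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi         # illustrative value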
RHOAIENG-130 - Synchronization issue when the model is just launched
When the status of the KServe container is Ready, a request is accepted even though the TGIS container is not ready.
- Workaround
- Wait a few seconds to ensure that all initialization has completed and the TGIS container is actually ready, and then review the request output.
RHOAIENG-3115 - Model cannot be queried for a few seconds after it is shown as ready
Models deployed using the multi-model serving platform might be unresponsive to queries despite appearing as Ready in the dashboard. You might see an "Application is not available" response when querying the model endpoint.
- Workaround
- Wait 30-40 seconds and then refresh the page in your browser.
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-1201 (previously documented as ODH-DASHBOARD-1908) - Cannot create workbench with an empty environment variable
When creating a workbench, if you click Add variable but do not select an environment variable type from the list, you cannot create the workbench. The field is not marked as required, and no error message is shown.
- Workaround
- None.
RHOAIENG-432 (previously documented as RHODS-12928) - Using unsupported characters can generate Kubernetes resource names with multiple dashes
When you create a resource and you specify unsupported characters in the name, then each space is replaced with a dash and other unsupported characters are removed, which can result in an invalid resource name.
- Workaround
- None.
RHOAIENG-226 (previously documented as RHODS-12432) - Deletion of the notebook-culler ConfigMap causes Permission Denied on dashboard
If you delete the notebook-controller-culler-config ConfigMap in the redhat-ods-applications namespace, you can no longer save changes to the Cluster Settings page on the OpenShift AI dashboard. The save operation fails with an HTTP request has failed error.
- Workaround
Complete the following steps as a user with cluster-admin permissions:
- Log in to your cluster by using the oc client.
- Enter the following command to update the OdhDashboardConfig custom resource in the redhat-ods-applications application namespace:
$ oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"notebookController.enabled": true}}}'
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after notebook restart
If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you created a workbench and specified a notebook image within the workbench, you cannot execute the pipeline, even after restarting the notebook.
- Workaround
- Stop the running notebook.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the notebook.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHOAIENG-11 - Separately installed instance of CodeFlare Operator not supported
In Red Hat OpenShift AI, the CodeFlare Operator is included in the base product and not in a separate Operator. Separately installed instances of the CodeFlare Operator from Red Hat or the community are not supported.
- Workaround
- Delete any installed CodeFlare Operators, and install and configure Red Hat OpenShift AI, as described in the Red Hat Knowledgebase solution How to migrate from a separately installed CodeFlare Operator in your data science cluster.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4. One possible approach is sketched below.
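One way to apply the sysctl cluster-wide is with a Node Tuning Operator profile; this is a hedged sketch, not the Knowledgebase article's exact steps, and the limit value is a placeholder to take from that solution:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: bpf-jit-limit
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: bpf-jit-limit
    data: |
      [main]
      summary=Increase net.core.bpf_jit_limit
      include=openshift-node
      [sysctl]
      net.core.bpf_jit_limit=<larger_value>
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 20
    profile: bpf-jit-limit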
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
NOTEBOOKS-210 - A notebook fails to export as a PDF file in Jupyter
When you export a notebook as a PDF file in Jupyter, the export process fails with an error.
- Workaround
- None.
RHOAIENG-1210 (previously documented as ODH-DASHBOARD-1699) - Workbench does not automatically restart for all configuration changes
When you edit the configuration settings of a workbench, a warning message appears stating that the workbench will restart if you make any changes to its configuration settings. This warning is misleading because in the following cases, the workbench does not automatically restart:
- Edit name
- Edit description
- Edit, add, or remove keys and values of existing environment variables
- Workaround
- Manually restart the workbench.
RHOAIENG-1208 (previously documented as ODH-DASHBOARD-1741) - Cannot create a workbench whose name begins with a number
If you try to create a workbench whose name begins with a number, the workbench does not start.
- Workaround
- Delete the workbench and create a new one with a name that begins with a letter.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you are able to open this again in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-9789 - Pipeline servers fail to start if they contain a custom database that includes a dash in its database name or username field
When you create a pipeline server that uses a custom database, if the value that you set for the dbname field or username field includes a dash, the pipeline server fails to start.
- Workaround
- Edit the pipeline server to omit the dash from the affected fields.
RHOAIENG-580 (previously documented as RHODS-9412) - Elyra pipeline fails to run if workbench is created by a user with edit permissions
If a user who has been granted edit permissions for a project creates a project workbench, that user sees the following behavior:
- During the workbench creation process, the user sees an Error creating workbench message related to the creation of Kubernetes role bindings.
- Despite the preceding error message, OpenShift AI still creates the workbench. However, the error message means that the user will not be able to use the workbench to run Elyra data science pipelines.
- If the user tries to use the workbench to run an Elyra pipeline, Jupyter shows an Error making request message that describes failed initialization.
- Workaround
- A user with administrator permissions (for example, the project owner) must create the workbench on behalf of the user with edit permissions. That user can then use the workbench to run Elyra pipelines.
RHODS-7718 - User without dashboard permissions is able to continue using their running notebooks and workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running notebooks and workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running notebooks and workbenches for that user.
RHOAIENG-1157 (previously documented as RHODS-6955) - An error can occur when trying to edit a workbench
When editing a workbench, an error similar to the following can occur:
Error creating workbench Operation cannot be fulfilled on notebooks.kubeflow.org "workbench-name": the object has been modified; please apply your changes to the latest version and try again
- Workaround
- None.
RHOAIENG-1152 (previously documented as RHODS-6356) - The notebook creation process fails for users who have never logged in to the dashboard
The dashboard’s notebook Administration page displays users belonging to the user group and admin group in OpenShift. However, if an administrator attempts to start a notebook server on behalf of a user who has never logged in to the dashboard, the server creation process fails and displays the following error message:
Request invalid against a username that does not exist.
- Workaround
- Request that the relevant user logs into the dashboard.
RHODS-5763 - Incorrect package version displayed during notebook selection
The Start a notebook server page displays an incorrect version number for the Anaconda notebook image.
- Workaround
- None.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed, as shown in the example patch below.
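For example, with a GPU-enabled MachineSet named gpu-machineset in the openshift-machine-api namespace (both names are placeholders for your environment), a patch similar to the following adds the label; the label value shown is arbitrary and serves only to mark the nodes:

# Add the accelerator label to machines created by this MachineSet
oc patch machineset gpu-machineset -n openshift-machine-api --type merge \
  -p '{"spec":{"template":{"spec":{"metadata":{"labels":{"cluster-api/accelerator":"nvidia-gpu"}}}}}}'

Machines subsequently created from this MachineSet carry the label, so the autoscaler waits for the GPU driver before treating the node as ready.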
RHOAIENG-1137 (previously documented as RHODS-5251) - Notebook server administration page shows users who have lost permission access
If a user who previously started a notebook server in Jupyter loses their permissions to do so (for example, if an OpenShift AI administrator changes the user’s group settings or removes the user from a permitted group), administrators continue to see the user’s notebook servers on the server Administration page. As a consequence, an administrator is able to restart notebook servers that belong to the user whose permissions were revoked.
- Workaround
- None.
RHODS-4799 - Tensorboard requires manual steps to view
When you use the TensorFlow or PyTorch notebook images and want to use TensorBoard to display data, you must complete manual steps to include environment variables in the notebook environment and to import those variables for use in your code.
- Workaround
- When you start your notebook server, use the following code to set the value of the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID:

import os
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
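If the image includes the TensorBoard notebook extension, you can then display TensorBoard inline. The following is a sketch, assuming the tensorboard package is available in the image and that your training logs are written to a logs directory (a placeholder path); the notebook integration can pick up TENSORBOARD_PROXY_URL when building the URL it renders, which is why the variable must be set first:

%load_ext tensorboard
%tensorboard --logdir logs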
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHODS-4627 - The CronJob responsible for validating Anaconda Professional Edition’s license is suspended and does not run daily
The CronJob responsible for validating Anaconda Professional Edition’s license is automatically suspended by the OpenShift AI operator. As a result, the CronJob does not run daily as scheduled. In addition, when Anaconda Professional Edition’s license expires, Anaconda Professional Edition is not indicated as disabled on the OpenShift AI dashboard.
- Workaround
- None.
RHOAIENG-1141 (previously documented as RHODS-4502) - The NVIDIA GPU Operator tile on the dashboard displays button unnecessarily
GPUs are automatically available in Jupyter after the NVIDIA GPU Operator is installed. The Enable button, located on the NVIDIA GPU Operator tile on the Explore page, is therefore redundant. In addition, clicking the Enable button moves the NVIDIA GPU Operator tile to the Enabled page, even if the Operator is not installed.
- Workaround
- None.
RHOAIENG-1135 (previously documented as RHODS-3985) - Dashboard does not display Enabled page content after ISV operator uninstall
After an ISV operator is uninstalled, no content is displayed on the Enabled page on the dashboard. Instead, the following error is displayed:
Error loading components HTTP request failed
- Workaround
- Wait 30-40 seconds and then refresh the page in your browser.
RHODS-3984 - Incorrect package versions displayed during notebook selection
In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.
- Workaround
- When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed, and which versions you have, by running the !pip list command in a notebook cell, as shown in the example below.
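For example, to narrow the output to the packages that the page reports incorrectly, you can filter the list in a notebook cell (a minimal sketch; it assumes grep is available in the image, which is typical for these notebook images):

# Show only the JupyterLab and Notebook package versions
!pip list | grep -i -E "jupyterlab|notebook"
# Show the Python version used by the image
!python --version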
RHODS-2956 - Error can occur when creating a notebook instance
When creating a notebook instance in Jupyter, a Directory not found error appears intermittently. This error message can be ignored by clicking Dismiss.
- Workaround
- None.
RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear only when the user clicks the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHOAIENG-1134 (previously documented as RHODS-2879) - License revalidation action appears unnecessarily
The dashboard action to revalidate a disabled application license appears unnecessarily for applications that do not have a license validation or activation system. In addition, when a user attempts to revalidate a license that cannot be revalidated, no feedback is displayed to explain why the action cannot be completed.
- Workaround
- None.
RHOAIENG-2305 (previously documented as RHODS-2650) - Error can occur during Pachyderm deployment
When creating an instance of the Pachyderm operator, a webhook error appears intermittently, preventing the creation process from starting successfully. The webhook error indicates that either the Pachyderm operator failed a health check, causing it to restart, or that the operator process exceeded its container’s allocated memory limit, triggering an Out of Memory (OOM) kill.
- Workaround
- Repeat the Pachyderm instance creation process until the error no longer appears.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.
RHODS-1888 - OpenShift AI hyperlink still visible after uninstall
When the OpenShift AI Add-on is uninstalled from an OpenShift Dedicated cluster, the link to the OpenShift AI interface remains visible in the application launcher menu. Clicking this link results in a "Page Not Found" error because OpenShift AI is no longer available.
- Workaround
- None.
Chapter 9. Product features
Red Hat OpenShift AI provides a rich set of features for data scientists and IT operations administrators. To learn more, see Introduction to Red Hat OpenShift AI.