Release notes
Features, enhancements, resolved issues, and known issues associated with this release
Chapter 1. Overview of OpenShift AI
Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications.
OpenShift AI provides an environment to develop, train, serve, test, and monitor AI/ML models and applications on-premise or in the cloud.
For data scientists, OpenShift AI includes Jupyter and a collection of default workbench images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. Deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. You can also accelerate your data science experiments through the use of graphics processing units (GPUs) and Intel Gaudi AI accelerators.
For administrators, OpenShift AI enables data science workloads in an existing Red Hat OpenShift or ROSA environment. Manage users with your existing OpenShift identity provider, and manage the resources available to workbenches to ensure data scientists have what they require to create, train, and host models. Use accelerators to reduce costs and allow your data scientists to enhance the performance of their end-to-end data science workflows using graphics processing units (GPUs) and Intel Gaudi AI accelerators.
OpenShift AI has two deployment options:
- Self-managed software that you can install on-premise or in the cloud. You can install OpenShift AI Self-Managed in a self-managed environment such as OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift.
- A managed cloud service, installed as an add-on in Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP) or in Red Hat OpenShift Service on Amazon Web Services (ROSA classic).
For information about OpenShift AI Cloud Service, see Product Documentation for Red Hat OpenShift AI.
For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.
For a detailed view of the 2.24 release lifecycle, including the full support phase window, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.
Chapter 2. New features and enhancements
This section describes new features and enhancements in Red Hat OpenShift AI 2.24.
2.1. New features
- Support added to TrustyAI for KServe InferenceLogger integration
TrustyAI now provides support for KServe inference deployments through automatic InferenceLogger configuration. Both KServe Raw and Serverless deployment modes are supported, and the deployment mode is automatically detected by using the InferenceService annotations.
- Enhanced workload scheduling with Kueue
OpenShift AI now provides enhanced workload scheduling with the Red Hat build of Kueue. Kueue is a job-queuing system that provides resource-aware scheduling for workloads, improving GPU utilization and ensuring fair resource sharing with intelligent, quota-based scheduling across AI workloads.
This feature expands Kueue’s workload support in OpenShift AI to include workbenches (Notebook) and model serving (InferenceService), in addition to the previously supported distributed training jobs (RayJob, RayCluster, PyTorchJob).
A validating webhook now handles queue enforcement. This webhook ensures that in any project enabled for Kueue management (with the kueue.openshift.io/managed=true label), all supported workloads must specify a target LocalQueue (with the kueue.x-k8s.io/queue-name label). This replaces the Validating Admission Policy used in previous versions.
For more information, see Managing workloads with Kueue.
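For illustration, a minimal sketch of the labels involved, with hypothetical project, workbench, and queue names (the workbench spec is omitted):
apiVersion: v1
kind: Namespace
metadata:
  name: my-data-science-project          # hypothetical project enabled for Kueue management
  labels:
    kueue.openshift.io/managed: "true"
---
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: my-workbench                     # hypothetical workbench; spec omitted for brevity
  namespace: my-data-science-project
  labels:
    kueue.x-k8s.io/queue-name: my-local-queue   # target LocalQueue required by the validating webhook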
- Support added to view git commit hash in an image
- You can now view the git commit hash in an image. This feature allows you to determine if the image has changed, even if the version number stays the same. You can also trace the image back to the source code if needed.
- Support added for data science pipelines with Elyra
- When using data science pipelines with Elyra, you now have the option to use a service-based URL rather than a route-based URL. Your data science pipeline can be used from the service directly by including the port number.
- Workbench images mirrored by default
The latest version of workbench images is mirrored by default when you mirror images to a private registry for a disconnected installation. As an administrator, you can mirror older versions of workbench images through the additionalImages field in the Disconnected Helper configuration.
Important: Only the latest version of workbench images is supported with bug fixes and security updates. Older versions of workbench images are available, but they do not receive bug fixes or security updates.
2.2. Enhancements
- Support for serving models from a Persistent Volume Claim (PVC)
- Red Hat OpenShift AI now supports serving models directly from existing cluster storage. With this feature, you can serve models from pre-existing persistent volume claim (PVC) locations and create new PVCs for model storage within the interface.
- New option to disable caching for all pipelines in a project
You can now disable caching for all data science pipelines in the pipeline server, which overrides all pipeline and task-level caching settings. This global setting is useful for scenarios such as debugging, development, or cases that require deterministic re-execution.
This option is configurable with the Allow caching to be configured per pipeline and task checkbox when you create or edit a pipeline server. Cluster administrators can also configure this spec.apiServer.cacheEnabled option. By default, this field is set to true. To disable caching cluster-wide, set this field to false. For more information, see Overview of data science pipelines caching.
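For reference, a minimal sketch of the field in the DataSciencePipelinesApplication (DSPA) custom resource; the resource name and namespace are hypothetical, and the apiVersion should match the DSPA CRD installed on your cluster:
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa
  namespace: my-data-science-project
spec:
  apiServer:
    cacheEnabled: false   # set to false to disable caching for all pipelines on this pipeline server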
- Migration of production images from Quay to Red Hat Registry
- RHOAI production images that fall under the current support model have been migrated from Quay to Red Hat Registry. They will continue to receive updates as defined by their release channel. Previously released images will remain in Quay.
- JupyterLab version updated
JupyterLab is updated from version 4.2 to 4.4. This update includes a Move to Trash dropdown option when you right-click a folder, as well as other bug fixes and enhancements.
- Updated vLLM component versions
OpenShift AI supports the following vLLM versions for each listed component:
- vLLM CUDA: v0.10.0.2
- vLLM ROCm: v0.10.0.2
- vLLM Gaudi: v0.8.5
- vLLM Power/Z: v0.10.0.2
- OpenVINO Model Server: v2025.2.1
For more information, see vllm in GitHub.
Chapter 3. Technology Preview features
This section describes Technology Preview features in Red Hat OpenShift AI 2.24. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- Build Generative AI Apps with Llama Stack on OpenShift AI
With this release, the Llama Stack Technology Preview feature on OpenShift AI enables Retrieval-Augmented Generation (RAG) and agentic workflows for building next-generation generative AI applications. It supports remote inference, built-in embeddings, and vector database operations. It also integrates with providers such as the TrustyAI provider for safety and the TrustyAI LM-Eval provider for evaluation.
This preview includes tools, components, and guidance for enabling the Llama Stack Operator, interacting with the RAG Tool, and automating PDF ingestion and keyword search capabilities to enhance document discovery.
- Centralized platform metrics and tracing
- Centralized platform metrics and tracing are now available as a Technology Preview feature in OpenShift AI. This feature enables integration with the Cluster Observability Operator (COO), Red Hat build of OpenTelemetry, and Tempo Operator, and provides optional out-of-the-box observability configurations for OpenShift AI. It also introduces a dedicated observability stack. Future releases will collect infrastructure and customer workload signals in the dedicated observability stack.
- Support for Llama Stack Distribution version 0.2.17
The Llama Stack Distribution now includes Llama-stack version 0.2.17 as Technology Preview. This feature brings a number of capabilities, including:
- Model providers: Self-hosted providers like vLLM are now automatically registered, so you no longer need to manually set INFERENCE_MODEL variables.
- Infrastructure and backends: Improved the OpenAI inference and added support for the Vector Store API.
- Error handling: Errors are now standardized, and library client initialization has been improved.
- Access Control: The Vector Store and File APIs now enforce access control, and telemetry read APIs are gated by user roles.
- Bug fixes.
- Support for IBM Power accelerated Triton Inference Server
You can now enable IBM Power architecture support for the Triton Inference Server (CPU only) with Python and ONNX backends. You can deploy the Triton Inference Server as a custom model serving runtime on IBM Power architecture as a Technology Preview feature in Red Hat OpenShift AI.
For details, see Triton Inference Server image.
- Support for IBM Z accelerated Triton Inference Server
You can now enable Z architecture support for the Triton Inference Server (Telum I/Telum II) with multiple backend options, including ONNX-MLIR, Snap ML (C++), and PyTorch. The Triton Inference Server can be deployed as a custom model serving runtime on IBM Z architecture as a Technology Preview feature in Red Hat OpenShift AI.
For details, see IBM Z accelerated Triton Inference Server.
- Support for Kubernetes Event-driven Autoscaling (KEDA)
OpenShift AI now supports Kubernetes Event-driven Autoscaling (KEDA) in its standard deployment mode. This Technology Preview feature enables metrics-based autoscaling for inference services, allowing for more efficient management of accelerator resources, reduced operational costs, and improved performance for your inference services.
To set up autoscaling for your inference service in standard deployments, you need to install and configure the OpenShift Custom Metrics Autoscaler (CMA), which is based on KEDA.
For more information about this feature, see Configuring metrics-based autoscaling.
- LM-Eval model evaluation UI feature
- TrustyAI now offers a user-friendly UI for LM-Eval model evaluations as Technology Preview. This feature allows you to input evaluation parameters for a given model and returns an evaluation-results page, all from the UI.
- Use Guardrails Orchestrator with LlamaStack
You can now run detections using the Guardrails Orchestrator tool from TrustyAI with Llama Stack as a Technology Preview feature, using the built-in detection component. To use this feature, ensure that TrustyAI is enabled, the FMS Orchestrator and detectors are set up, and, if needed for full compatibility, KServe RawDeployment mode is in use. No manual setup is required.
Then, in the DataScienceCluster custom resource for the Red Hat OpenShift AI Operator, set the spec.llamastackoperator.managementState field to Managed.
For more information, see the related resources on GitHub.
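A minimal sketch of that DataScienceCluster change (other components omitted; the resource name is an example):
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc              # example name; edit your existing DataScienceCluster resource
spec:
  llamastackoperator:
    managementState: Managed     # enables the Llama Stack Operator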
- Define and manage pipelines with Kubernetes API
You can now define and manage data science pipelines and pipeline versions by using the Kubernetes API, which stores them as custom resources in the cluster instead of the internal database. This Technology Preview feature makes it easier to use OpenShift GitOps (Argo CD) or similar tools to manage pipelines, while still allowing you to manage them through the OpenShift AI user interface, API, and kfp SDK.
This option, enabled by default, is configurable with the Store pipeline definitions in Kubernetes checkbox when you create or edit a pipeline server. Cluster administrators can also configure this option by setting the spec.apiServer.pipelineStore field to kubernetes or database in the DataSciencePipelinesApplication (DSPA) custom resource. For more information, see Defining a pipeline by using the Kubernetes API.
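As with the caching setting shown earlier, a minimal sketch of the relevant DSPA field (names are hypothetical):
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa
  namespace: my-data-science-project
spec:
  apiServer:
    pipelineStore: kubernetes    # store pipeline definitions as custom resources; use "database" for the internal database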
- Model customization with LAB-tuning
LAB-tuning is now available as a Technology Preview feature, enabling data scientists to run an end-to-end workflow for customizing large language models (LLMs). The LAB (Large-scale Alignment for chatBots) method offers a more efficient alternative to traditional fine-tuning by leveraging taxonomy-guided synthetic data generation (SDG) and a multi-phase training approach.
Data scientists can run LAB-tuning workflows directly from the OpenShift AI dashboard by using the new preconfigured InstructLab pipeline, which simplifies the tuning process. For details on enabling and using LAB-tuning, see Enabling LAB-tuning and Customizing models with LAB-tuning.
Important: The LAB-tuning feature is not currently supported for disconnected environments.
- Red Hat OpenShift AI Model Catalog
The Red Hat OpenShift AI Model Catalog is now available as a Technology Preview feature. This functionality starts with connecting users with the Granite family of models, as well as the teacher and judge models used in LAB-tuning.
Note: The model catalog feature is not currently supported for disconnected environments.
- New Feature Store component
You can now install and manage Feature Store as a configurable component in the Red Hat OpenShift AI Operator. Based on the open-source Feast project, Feature Store acts as a bridge between ML models and data, enabling consistent and scalable feature management across the ML lifecycle.
This Technology Preview release introduces the following capabilities:
- Centralized feature repository for consistent feature reuse
- Python SDK and CLI for programmatic and command-line interactions to define, manage, and retrieve features for ML models
- Feature definition and management
- Support for a wide range of data sources
- Data ingestion via feature materialization
- Feature retrieval for both online model inference and offline model training
- Role-Based Access Control (RBAC) to protect sensitive features
- Extensibility and integration with third-party data and compute providers
- Scalability to meet enterprise ML needs
- Searchable feature catalog
- Data lineage tracking for enhanced observability
For configuration details, see Configuring Feature Store.
- IBM Power and IBM Z architecture support
- IBM Power (ppc64le) and IBM Z (s390x) architectures are now supported as a Technology Preview feature. Currently, you can only deploy models in standard mode on these architectures.
- Support for vLLM in IBM Power and IBM Z architectures
- vLLM runtime templates are available for use in IBM Power and IBM Z architectures as Technology Preview.
- Enable targeted deployment of workbenches to specific worker nodes in Red Hat OpenShift AI Dashboard using node selectors
Hardware profiles are now available as a Technology Preview. The hardware profiles feature enables users to target specific worker nodes for workbenches or model-serving workloads. It allows users to target specific accelerator types or CPU-only nodes.
This feature replaces the current accelerator profiles feature and container size selector field, offering a broader set of capabilities for targeting different hardware configurations. While accelerator profiles, taints, and tolerations provide some capabilities for matching workloads to hardware, they do not ensure that workloads land on specific nodes, especially if some nodes lack the appropriate taints.
The hardware profiles feature supports both accelerator and CPU-only configurations, along with node selectors, to enhance targeting capabilities for specific worker nodes. Administrators can configure hardware profiles in the settings menu. Users can select the enabled profiles using the UI for workbenches, model serving, and Data Science Pipelines where applicable.
- RStudio Server workbench image
With the RStudio Server workbench image, you can access the RStudio IDE, an integrated development environment for R. The R programming language is used for statistical computing and graphics to support data analysis and predictions.
To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Important: Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
- CUDA - RStudio Server workbench image
With the CUDA - RStudio Server workbench image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools.
To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images.
Important: Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench.
The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation. You should review their licensing terms before you use this sample workbench.
- Model Registry
- OpenShift AI now supports the Model Registry Operator. The Model Registry Operator is not installed by default in Technology Preview mode. The model registry is a central repository that contains metadata related to machine learning models from inception to deployment.
- Support for multinode deployment of very large models
- Serving models over multiple graphical processing unit (GPU) nodes when using a single-model serving runtime is now available as a Technology Preview feature. Deploy your models across multiple GPU nodes to improve efficiency when deploying large models such as large language models (LLMs). For more information, see Deploying models across multiple GPU nodes.
Chapter 4. Developer Preview features
This section describes Developer Preview features in Red Hat OpenShift AI 2.24. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
- Distributed Inference with LLM-D
Distributed Inference with LLM-D is now available as a Developer Preview feature. Distributed Inference with LLM-D supports multi-model serving, intelligent inference scheduling, and disaggregated serving for improved GPU utilization on GenAI models.
For more information, see Deploying a model by using the LLM Inference Service (LLM-D).
- Run evaluations for TrustyAI-Llama Stack using LM-Eval
You can now run evaluations using LM-Eval on Llama Stack with TrustyAI as a Developer Preview feature, using the built-in LM-Eval component and advanced content moderation tools. To use this feature, ensure that TrustyAI is enabled, the FMS Orchestrator and detectors are set up, and, if needed for full compatibility, KServe RawDeployment mode is in use. No manual setup is required.
Then, in the DataScienceCluster custom resource for the Red Hat OpenShift AI Operator, set the spec.llamastackoperator.managementState field to Managed.
For more information, see the related resources on GitHub.
- LLM Compressor integration
LLM Compressor capabilities are now available in Red Hat OpenShift AI as a Developer Preview feature. A new workbench image with the llm-compressor library and a corresponding data science pipelines runtime image make it easier to compress and optimize your large language models (LLMs) for efficient deployment with vLLM. For more information, see llm-compressor in GitHub.
You can use LLM Compressor capabilities in two ways:
- Use a Jupyter notebook with the workbench image available at Red Hat Quay.io: opendatahub/llmcompressor-workbench. For an example Jupyter notebook, see examples/llmcompressor/workbench_example.ipynb in the red-hat-ai-examples repository.
- Run a data science pipeline that executes model compression as a batch process with the runtime image available at Red Hat Quay.io: opendatahub/llmcompressor-pipeline-runtime. For an example pipeline, see examples/llmcompressor/oneshot_pipeline.py in the red-hat-ai-examples repository.
- Support for AppWrapper in Kueue
- AppWrapper support in Kueue is available as a Developer Preview feature. The experimental API enables the use of AppWrapper-based workloads with the distributed workloads feature.
Chapter 5. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.
5.1. Upcoming deprecations
5.1.1. Upcoming deprecation of LAB-tuning
The LAB-tuning feature, currently available as a Technology Preview, is planned for deprecation in a later release. If you are using LAB-tuning for large language model customization, plan to migrate to alternative fine-tuning or model customization methods as they become available.
5.2. Deprecated
5.2.1. Deprecated CodeFlare Operator
Starting with OpenShift AI 2.24, the CodeFlare Operator is deprecated and will be removed in a future release of OpenShift AI.
5.2.2. Deprecated embedded Kueue component
Starting with OpenShift AI 2.24, the embedded Kueue component for managing distributed workloads is deprecated. OpenShift AI now uses the Red Hat Build of Kueue Operator to provide enhanced workload scheduling across distributed training, workbench, and model serving workloads. The deprecated embedded Kueue component is not supported in any Extended Update Support (EUS) release. To ensure workloads continue using queue management, you must migrate from the embedded Kueue component to the Red Hat Build of Kueue Operator, which requires OpenShift Container Platform 4.18 or later. To migrate, complete the following steps:
- Install the Red Hat Build of Kueue Operator from OperatorHub.
- Edit your DataScienceCluster custom resource to set the spec.components.kueue.managementState field to Unmanaged.
- Verify that existing Kueue configurations (ClusterQueue and LocalQueue) are preserved after migration.
For detailed instructions, see Migrating to the Red Hat build of Kueue Operator.
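For reference, a minimal sketch of the DataScienceCluster change described in the second step (other components omitted; the resource name is an example):
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc              # example name; edit your existing DataScienceCluster resource
spec:
  components:
    kueue:
      managementState: Unmanaged   # hand queue management over to the Red Hat Build of Kueue Operator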
This deprecation does not affect the Red Hat OpenShift AI API tiers.
5.2.3. Multi-model serving platform (ModelMesh)
Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.
For more information or for help on using the single-model serving platform, contact your account manager.
5.2.4. Deprecated Text Generation Inference Server (TGIS)
Starting with OpenShift AI version 2.19, the Text Generation Inference Server (TGIS) is deprecated. TGIS will continue to be supported through the OpenShift AI 2.16 EUS lifecycle. Caikit-TGIS and Caikit are not affected and will continue to be supported. The out-of-the-box serving runtime template will no longer be deployed. vLLM is recommended as a replacement runtime for TGIS.
5.2.5. Deprecated accelerator profiles
Accelerator profiles are now deprecated. To target specific worker nodes for workbenches or model serving workloads, use hardware profiles.
5.2.6. Deprecated OpenVINO Model Server (OVMS) plugin
The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.
5.2.7. OpenShift AI dashboard user management moved from OdhDashboardConfig to Auth resource
Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.
Resource | 2.16 and earlier | 2.17 and later versions
---|---|---
apiVersion | opendatahub.io/v1alpha | services.platform.opendatahub.io/v1alpha1
kind | OdhDashboardConfig | Auth
metadata.name | odh-dashboard-config | auth
Admin groups | spec.groupsConfig.adminGroups | spec.adminGroups
User groups | spec.groupsConfig.allowedGroups | spec.allowedGroups
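A minimal sketch of the Auth resource, assuming the values in the table above; the group names are examples:
apiVersion: services.platform.opendatahub.io/v1alpha1
kind: Auth
metadata:
  name: auth
spec:
  adminGroups:
    - rhods-admins            # example OpenShift group granted dashboard administrator access
  allowedGroups:
    - system:authenticated    # example: allow all authenticated users to access the dashboard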
5.2.8. Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
Deprecated parameter | Replaced by
---|---
head_cpus | head_cpu_requests, head_cpu_limits
head_memory | head_memory_requests, head_memory_limits
min_cpus | worker_cpu_requests
max_cpus | worker_cpu_limits
min_memory | worker_memory_requests
max_memory | worker_memory_limits
head_gpus | head_extended_resource_requests
num_gpus | worker_extended_resource_requests
You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
5.3. Removed functionality
5.3.1. Microsoft SQL Server command-line tool removal
Starting with OpenShift AI 2.24, the Microsoft SQL Server command-line tools (sqlcmd, bcp) have been removed from workbenches. You can no longer manage Microsoft SQL Server using the preinstalled command-line client.
5.3.2. Model registry ML Metadata (MLMD) server removal
Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. This change simplifies the overall architecture and ensures the long-term maintainability and efficiency of the model registry by transitioning from the ml-metadata component to direct database access within the model registry itself.
If you see the following error for your model registry deployment, this means that your database schema migration has failed:
error: error connecting to datastore: Dirty database version {version}. Fix and force version.
You can fix this issue by manually changing the database from a dirty state to 0 before traffic can be routed to the pod. Perform the following steps:
Find the name of your model registry database pod as follows:
kubectl get pods -n <your-namespace> | grep model-registry-db
Replace <your-namespace> with the namespace where your model registry is deployed.
Use kubectl exec to run the query on the model registry database pod as follows:
kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"
Replace <your-namespace> with your model registry namespace and <your-db-pod-name> with the pod name that you found in the previous step. Replace <your-db-name> with your model registry database name.
This resets the dirty state in the database, allowing the model registry to start correctly.
5.3.3. Embedded subscription channel not used in some versions
For OpenShift AI 2.8 to 2.20 and 2.22 to 2.24, the embedded subscription channel is not used. You cannot select the embedded channel for a new installation of the Operator for those versions. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
5.3.4. Anaconda removal
Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.
If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:
Remove the secret that contains your Anaconda password:
oc delete secret -n redhat-ods-applications anaconda-ce-access
Remove the ConfigMap for the Anaconda validation cronjob:
oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result
Remove the Anaconda image stream:
oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda
Remove the Anaconda job that validated the downloading of images:
oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run
Remove any pods related to Anaconda cronjob runs:
oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run*/ {print $1}' | xargs -r oc delete pod -n redhat-ods-applications
5.3.5. Data science pipelines v1 support removed
Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Starting with OpenShift AI 2.9, data science pipelines are based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.
Starting with OpenShift AI 2.16, data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.
OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI 2.16 or later, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Migrating to data science pipelines 2.0.
Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade to OpenShift AI 2.16 or later with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.
5.3.6. Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs are no longer stored in S3-compatible storage for Python scripts that run in Elyra pipelines. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.
If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.
Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
5.3.7. Beta subscription channel no longer used
Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
5.3.8. HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
Chapter 6. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI 2.24. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.24 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.
6.1. Issues resolved in Red Hat OpenShift AI 2.24
OCPBUGS-44432 - ImageStream unable to import image tags in a disconnected OpenShift environment
Before this update, if you used the ImageTagMirrorSet (ITMS) or ImageDigestMirrorSet (IDMS) in a disconnected OpenShift environment, the ImageStream resource prevented the mirror from importing the image, and a RHOAI workbench instance could not be created. This issue is now resolved in OpenShift Container Platform 4.19.13 or later. Update your OpenShift instances to 4.19.13 or later to avoid this issue.
RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade
After upgrading from OpenShift AI 2.22 or earlier with the model registry component enabled, the model registry Operator could enter a restart loop. This was due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod. This issue is now resolved.
RHOAIENG-31248 - KServe http: TLS handshake error
Previously, the OpenShift CA auto-injection in the localmodelcache validation webhook configuration was missing the necessary annotation, leading to repeated TLS handshake errors. This issue is now resolved.
RHOAIENG-31376 - Inference service creation using vLLM runtime fails on IBM Power cluster
Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: 'OpNamespace' '_C_utils' object has no attribute 'init_cpu_threads_env'. This issue is now resolved.
RHOAIENG-31377 - Inference service creation fails on IBM Power cluster
Previously, when you attempted to create an inference service using the vLLM runtime on an IBM Power cluster, it failed with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name. This issue is now resolved.
RHOAIENG-31498 - Incorrect inference URL in LlamaStack LMEval provider
Before this update, when you ran evaluations on Llama Stack using the LMEval provider, the evaluation jobs erroneously used the model server endpoint as v1/openai/v1/completions. This resulted in a job failure because the correct model server endpoint was v1/completions. This issue is now resolved.
RHOAIENG-31536 - Prometheus configuration not reconciled properly
Before this update, the Monitoring resource did not reconcile properly and showed a "Not Ready" status when upgrading to or installing 2.23. This issue occurred because the resource required the OpenTelemetry and Cluster Observability Operators to be installed, even if no new monitoring or tracing configurations were added to the DSCInitialization resource. As a result, Prometheus configuration did not reconcile and led to empty or outdated alert configurations. This issue is now resolved.
RHOAIENG-4148 - Standalone notebook fails to start due to character length
Previously, the notebook controller logic did not proactively check username lengths before it attempted to create resources. The notebook controller creates OpenShift resources using your username directly. As a result, if the combined name of the OpenShift Route and namespace exceeded the 63-character limit for DNS subdomains, the creation of the OpenShift Route failed with the following validation error: spec.host: ... must be no more than 63 characters. Without the Route, the dependent OAuthClient could not be configured, and workbenches could not start.
With this release, the notebook controller’s logic has been updated to proactively check name character lengths before creating resources. For Routes, if the combined length of the notebook name and namespace would exceed the 63-character limit, the controller now creates the Route using the generateName field with a prefix of nb-. For StatefulSets, if the notebook name is longer than 52 characters, the controller also uses generateName: "nb-" to prevent naming conflicts.
RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error
Previously, if you had enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but had not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly showed that KServe is not ready. However, the Degraded condition incorrectly showed a value of False. This issue is now resolved.
RHOAIENG-29352 - Missing Documentation and Support menu items
Previously, in the OpenShift AI top navigation bar, when you clicked the help icon, the menu contained only the About menu item, and the Documentation and Support menu items were missing. This issue is now resolved.
RHAIENG-496 - Error creating LlamaStackDistribution as a non-administrator user
Previously, non-administrator requests failed due to insufficient role-based access control (RBAC) as the deployed role definitions were outdated or incomplete for the current Llama Stack resources (for example, the LlamaStackDistribution CRD). This issue is now resolved.
Chapter 7. Known issues
This section describes known issues in Red Hat OpenShift AI 2.24 and any known methods of working around these issues.
RHOAIENG-35623 - Model deployment fails when using hardware profiles
Model deployments that use hardware profiles fail because the Red Hat OpenShift AI Operator does not inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService when manually creating InferenceService resources. As a result, the model deployment pods cannot be scheduled to suitable nodes and the deployment fails to enter a ready state. Workbenches that use the same hardware profile continue to deploy successfully.
- Workaround
- Run a script to manually inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService, as described in the Knowledgebase solution Workaround for model deployment failure when using hardware profiles.
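For orientation, a hedged sketch of the kind of fields that the script injects into the InferenceService predictor; the resource name, node selector, and toleration values are hypothetical examples taken from a GPU hardware profile:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model                          # hypothetical deployment name
spec:
  predictor:
    nodeSelector:
      nvidia.com/gpu.present: "true"      # example node selector copied from the hardware profile
    tolerations:
      - key: nvidia.com/gpu               # example toleration copied from the hardware profile
        operator: Exists
        effect: NoSchedule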
RHOAIENG-33995 - Deployment of an inference service for Phi and Mistral models fails
The creation of an inference service for Phi and Mistral models using the vLLM runtime on an IBM Power cluster with OpenShift Container Platform 4.19 fails due to an error related to the CPU backend. As a result, deployment of these models is affected, causing inference service creation failure.
- Workaround
- To resolve this issue, disable the sliding_window mechanism in the serving runtime if it is enabled for CPU and Phi models. Sliding window is not currently supported in V1.
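A hedged sketch of one way to disable the mechanism in a vLLM-based ServingRuntime; the runtime name is hypothetical and the --disable-sliding-window engine argument is an assumption about the vLLM version in use:
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cpu-runtime                  # hypothetical runtime name
spec:
  containers:
    - name: kserve-container
      args:
        - --disable-sliding-window        # assumed vLLM flag that turns off the sliding_window mechanism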
RHOAIENG-33914 - LM-Eval Tier2 task test failures
There can be some failures with LM-Eval Tier2 task tests because the Massive Multitask Language Understanding Symbol Replacement (MMLUSR) tasks are broken if you are using an older version of the trustyai-service-operator.
- Workaround
- Ensure that the latest version of the trustyai-service-operator is installed.
RHOAIENG-33795 - Manual Route creation needed for gRPC endpoint verification for Triton Inference Server on IBM Z
When verifying the Triton Inference Server with a gRPC endpoint, the Route does not get created automatically. This happens because the Operator currently defaults to creating an edge-terminated route for REST only.
- Workaround
To resolve this issue, manually create a Route for gRPC endpoint verification for the Triton Inference Server on IBM Z:
- When the model deployment pod is up and running, define an edge-terminated Route object in a YAML file (see the example sketch after this procedure).
- Create the Route object:
oc apply -f <route-file-name>.yaml
- Send an inference request to the gRPC endpoint, where <ca_cert_file> is the path to your cluster router CA cert (for example, router-ca.crt) and <triton_protoset_file> is compiled as a protobuf descriptor file. You can generate the descriptor file with protoc -I. --descriptor_set_out=triton_desc.pb --include_imports grpc_service.proto. Download the grpc_service.proto and model_config.proto files from the triton-inference-server GitHub page.
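Because the original Route contents are not reproduced above, the following is a hedged sketch of an edge-terminated Route pointing at the predictor service; the names and the gRPC target port are hypothetical:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: triton-grpc                       # hypothetical route name
  namespace: my-project                   # namespace of the model deployment
spec:
  to:
    kind: Service
    name: triton-predictor                # hypothetical predictor service name
  port:
    targetPort: grpc                      # assumed name of the gRPC port on the predictor service
  tls:
    termination: edge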
RHOAIENG-33697 - Unable to Edit or Delete models unless status is "Started"
When you deploy a model on the NVIDIA NIM or single-model serving platform, the Edit and Delete options in the action menu are not available for models in the Starting or Pending states. These options become available only after the model has been successfully deployed.
- Workaround
- Wait until the model is in the Started state to make any changes or to delete the model.
RHOAIENG-33645 - LM-Eval Tier1 test failures
There can be failures with LM-Eval Tier1 tests because confirm_run_unsafe_code is not passed as an argument when a job is run if you are using an older version of the trustyai-service-operator.
- Workaround
- Ensure that you are using the latest version of the trustyai-service-operator and that AllowCodeExecution is enabled.
RHOAIENG-32942 - Elyra pipelines fail when pipeline store is set to Kubernetes
When the pipeline store is configured to use Kubernetes, Elyra requires equality (eq) filters that are not supported by the REST API. Only substring filters are supported in this mode. As a result, pipelines created and submitted through Elyra from a workbench cannot run successfully. Submissions fail with the following error:
Invalid input error: Filter eq is not implemented for Kubernetes pipeline store.
- Workaround
Configure the pipeline server to use the database instead of Kubernetes for storing pipelines:
- When creating a pipeline server, set the pipeline store to database.
- If the server is already created, update the DataSciencePipelinesApplication custom resource by setting .spec.pipelineStore to database. This triggers the dspa pod to be recreated.
After switching the pipeline store to database, Elyra pipelines can be submitted successfully from a workbench.
RHOAIENG-32897 - Pipelines defined with the Kubernetes API and invalid platformSpec do not appear in the UI or run
When a pipeline version defined with the Kubernetes API includes an empty or invalid spec.platformSpec field (for example, {} or missing the kubernetes key), the system misidentifies the field as the pipeline specification. As a result, the REST API omits the pipelineSpec, which prevents the pipeline version from being displayed in the UI and from running.
- Workaround
- Remove the spec.platformSpec field from the PipelineVersion object. After removing the field, the pipeline version is displayed correctly in the UI and the REST API returns the pipelineSpec as expected.
RHOAIENG-31386 - Error deploying an Inference Service with authenticationRef
When deploying an InferenceService with authenticationRef under external metrics, the authenticationRef field is removed after the first oc apply.
- Workaround
- Re-apply the resource to retain the field.
RHOAIENG-30493 - Error creating a workbench in a Kueue-enabled project
When using the dashboard to create a workbench in a Kueue-enabled project, the creation fails if Kueue is disabled on the cluster or if the selected hardware profile is not associated with a LocalQueue. In this case, the required LocalQueue cannot be referenced, the admission webhook validation fails, and the following error message is shown:
Error creating workbench
admission webhook "kubeflow-kueuelabels-validator.opendatahub.io" denied the request: Kueue label validation failed: missing required label "kueue.x-k8s.io/queue-name"
- Workaround
Enable Kueue and hardware profiles on your cluster as a user with cluster-admin permissions:
- Log in to your cluster by using the oc client.
- Run the following command to patch the OdhDashboardConfig custom resource in the redhat-ods-applications namespace:
oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"disableKueue": false, "disableHardwareProfiles": false}}}'
RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization
When you remove a DSCInitialization resource and create a new one using OpenShift AI console form view, it enables a Technology Preview observability stack. This results in the deployment of an unwanted observability stack when recreating a DSCInitialization resource.
- Workaround
To resolve this issue, manually remove the "metrics" and "traces" fields when recreating the DSCInitialization resource using the form view.
This is not required if you want to use the Technology Preview observability stack.
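A hedged sketch of where these fields typically sit in the DSCInitialization resource; the placement under spec.monitoring is an assumption, and the resource name shown is the default:
apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization
metadata:
  name: default-dsci
spec:
  monitoring:
    managementState: Managed
    # Remove the "metrics" and "traces" fields here if you do not want the
    # Technology Preview observability stack:
    # metrics: ...
    # traces: ...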
RHOAIENG-32145 - Llama Stack Operator deployment failures on OpenShift versions earlier than 4.17
When installing OpenShift AI on OpenShift clusters running versions earlier than 4.17, the integrated Llama Stack Operator (llamastackoperator) might fail to deploy.
The Llama Stack Operator requires Kubernetes version 1.32 or later, but OpenShift 4.15 uses Kubernetes 1.28. This version gap can cause schema validation failures when applying the LlamaStackDistribution custom resource definition (CRD), due to unsupported selectable fields introduced in Kubernetes 1.32.
- Workaround
- Install OpenShift AI on an OpenShift cluster running version 4.17 or later.
RHOAIENG-32242 - Failure on creating NetworkPolicies for OpenShift versions 4.15 and 4.16
When installing OpenShift AI on OpenShift clusters running versions 4.15 or 4.16, deployment of certain NetworkPolicy resources might fail. This can occur when the llamastackoperator or related components attempt to create a NetworkPolicy in a protected namespace, such as redhat-ods-applications. The request can be blocked by the admission webhook networkpolicies-validation.managed.openshift.io, which restricts modifications to certain namespaces and resources, even for cluster-admin users. This restriction can apply to both self-managed and Red Hat–managed OpenShift environments.
- Workaround
- Deploy OpenShift AI on an OpenShift cluster running version 4.17 or later. For clusters where the webhook restriction is enforced, contact your OpenShift administrator or Red Hat Support to determine an alternative deployment pattern or approved change to the affected namespace.
RHOAIENG-32599 - Inference service creation fails on IBM Z cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name.
- Workaround
- None.
RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19
When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).
- Workaround
- When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all.
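A minimal sketch of setting the environment variable on the predictor; the service name and model format are hypothetical examples:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-vllm-model                     # hypothetical deployment name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM                        # example model format served by the vLLM runtime
      env:
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"                    # binds OpenMP threads across all available CPUs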
RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access
When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.
The following error appears in the logs:
Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
...
PermissionError: [Error 13] Permission denied: '/.config'
- Workaround
- To prevent this error and suppress the usage stats logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment. This disables automatic usage reporting, avoiding permission issues when you write to system directories.
RHOAIENG-28910 - Unmanaged KServe resources are deleted after upgrading from 2.16 to 2.19 or later
During the upgrade from OpenShift AI 2.16 to 2.24, the FeatureTracker custom resource (CR) is deleted before its owner references are fully removed from associated KServe-related resources. As a result, resources that were originally created by the Red Hat OpenShift AI Operator with a Managed state and later changed to Unmanaged in the DataScienceCluster (DSC) custom resource (CR) might be unintentionally removed. This issue can disrupt model serving functionality until the resources are manually restored.
The following resources might be deleted in 2.24 if they were changed to Unmanaged in 2.16:
Kind | Namespace | Name
---|---|---
KnativeServing | knative-serving | knative-serving
ServiceMeshMember | knative-serving | default
Gateway | istio-system | kserve-local-gateway
Gateway | knative-serving | knative-ingress-gateway
Gateway | knative-serving | knative-local-gateway
- Workaround
If you have already upgraded from OpenShift AI 2.16 to 2.24, perform one of the following actions:
- If you have an existing backup, manually recreate the deleted resources without owner references to the FeatureTracker CR. If you do not have an existing backup, you can use the Operator to recreate the deleted resources:
  - Back up any resources you have already recreated.
  - In the DSC, set spec.components.kserve.serving.managementState to Managed, and then save the change to allow the Operator to recreate the resources.
  - Wait until the Operator has recreated the resources.
  - In the DSC, set spec.components.kserve.serving.managementState back to Unmanaged, and then save the change.
  - Reapply any previous custom changes to the recreated KnativeServing, ServiceMeshMember, and Gateway CRs.
If you have not yet upgraded, perform the following actions before upgrading to prevent this issue:
- In the DSC, set spec.components.kserve.serving.managementState to Unmanaged.
- For each of the affected KnativeServing, ServiceMeshMember, and Gateway resources listed in the above table, edit its CR by deleting the FeatureTracker owner reference. This edit removes the resource’s dependency on the FeatureTracker and prevents the deletion of the resource during the upgrade process.
RHOAIENG-24545 - Runtime images are not present in the workbench after the first start
The list of runtime images does not properly populate in the first workbench instance that runs in a namespace. Therefore, no image is shown for selection in the Elyra pipeline editor.
- Workaround
- Restart the workbench. After restarting the workbench, the list of runtime images populates both the workbench and the select box for the Elyra pipeline editor.
RHOAIENG-25090 - InstructLab prerequisites-check-op task fails when the model registration option is disabled
When you start a LAB-tuning run without selecting the Add model to <model registry name> checkbox, the InstructLab pipeline starts, but the prerequisites-check-op task fails with the following error in the pod logs:
failed: failed to resolve inputs: the resolved input parameter is null: output_model_name
- Workaround
- Select the Add model to <model registry name> checkbox when you configure the LAB-tuning run.
RHOAIENG-24786 - Upgrading the Authorino Operator from Technical Preview to Stable fails in disconnected environments
In disconnected environments, upgrading the Red Hat Authorino Operator from Technical Preview to Stable fails with an error in the authconfig-migrator-qqttz pod.
- Workaround
- Update the Red Hat Authorino Operator to the latest version in the tech-preview-v1 update channel (v1.1.2) by running the update script.
- Update the Red Hat Authorino Operator subscription to use the stable update channel.
- Select the update option for Authorino 1.2.1.
RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold
When you click Distributed workloads → Project metrics and view the Requested resources section, the charts show the requested resource values and the total shared quota value for each resource (CPU and Memory). However, when the Requested by all projects value exceeds the Warning threshold value for that resource, the expected warning message is not displayed.
- Workaround
- None.
SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe
After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.
- Workaround
Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:
- Get the webhooks:
oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
- Ensure KServe is disabled.
- Get the webhooks:
oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
- Delete the webhooks.
- Enable KServe.
- Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard
When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.
When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.
This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in data science pipelines, see Storing data with data science pipelines.
- Workaround
- When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.
OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment
This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.
- Workaround
- None.
RHOAIENG-12516 - fast releases are available in unintended release channels
Due to a known issue with the stream image delivery process, fast releases are currently available on unintended streaming channels, for example, stable and stable-x.y. For accurate release type, channel, and support lifecycle information, refer to the Life-cycle Dates table on the Red Hat OpenShift AI Self-Managed Life Cycle page.
- Workaround
- None.
RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later
If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version.
ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
- Workaround
- Delete the existing AppWrapper CRD:
$ oc delete crd appwrappers.workload.codeflare.dev
- Wait for about 20 seconds, and then ensure that a new AppWrapper CRD is automatically applied, as shown in the following example:
$ oc get crd appwrappers.workload.codeflare.dev
NAME                                 CREATED AT
appwrappers.workload.codeflare.dev   2024-11-22T18:35:04Z
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.
- Workaround
You can confirm if a pipeline is still running by checking that no child tasks remain active.
- From the OpenShift AI dashboard, click Data Science Pipelines → Runs.
- From the Project list, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics
In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.
- Workaround
- None.
RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade
Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Migrating to data science pipelines 2.0.
- Workaround
- Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade.
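For example, a minimal sketch of setting the component to Removed, assuming the default DataScienceCluster object is named default-dsc:
# Set the datasciencepipelines component to Removed (assumes the DataScienceCluster is named "default-dsc")
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec":{"components":{"datasciencepipelines":{"managementState":"Removed"}}}}'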
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
- If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
- If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageUri field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.
KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
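For example, a minimal sketch of uploading model files into the expected 1/ subdirectory with the AWS CLI; the bucket and file names are placeholders:
# Place the model files under a "1/" subdirectory so that OVMS finds them after KServe pulls the model
aws s3 cp ./model.xml s3://<s3_storage_bucket>/models/1/model.xml
aws s3 cp ./model.bin s3://<s3_storage_bucket>/models/1/model.bin
# In the dashboard Path field or in storageUri, reference only the parent directory: /<s3_storage_bucket>/models/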
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model.
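For example, a minimal sketch of querying the completed endpoint with curl; the host, model name, and input tensor are placeholders and must match your deployment and model:
# Query the deployed model over the v2 inference protocol (placeholders for host and model name)
curl -sk https://<inference-endpoint-host>/v2/models/<model-name>/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs":[{"name":"input","shape":[1,4],"datatype":"FP32","data":[[0.1,0.2,0.3,0.4]]}]}'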
RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart
The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.
- Workaround
- None.
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are generated and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection whose S3 bucket is not writable and then try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
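One way to check write access is with the AWS CLI, assuming it is configured with the same credentials as your data connection; the bucket name is a placeholder:
# Upload and then remove a small test object to confirm write access to the bucket
echo test > /tmp/write-check.txt
aws s3 cp /tmp/write-check.txt s3://<bucket-name>/write-check.txt
aws s3 rm s3://<bucket-name>/write-check.txt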
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart
If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you create a workbench and specify a workbench image for it, you cannot execute the pipeline, even after restarting the workbench.
- Workaround
- Stop the running workbench.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the workbench.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.
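Before applying the change described in the Knowledgebase solution, you can check the current value on the affected node; the node name is a placeholder:
# Check the current bpf_jit_limit value on the node where the pod is failing
oc debug node/<node-name> -- chroot /host sysctl net.core.bpf_jit_limit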
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you can still open it in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.
RHOAIENG-1152 (previously documented as RHODS-6356) - The basic-workbench creation process fails for users who have never logged in to the dashboard
The dashboard’s Administration page for basic workbenches displays users who belong to the user group and admin group in OpenShift. However, if an administrator attempts to start a basic workbench on behalf of a user who has never logged in to the dashboard, the basic-workbench creation process fails and displays the following error message:
Request invalid against a username that does not exist.
- Workaround
- Request that the relevant user logs into the dashboard.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
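For example, a minimal sketch of applying the label with a patch; the MachineSet name and label value are placeholders:
# Label nodes created by the GPU MachineSet so the autoscaler treats them as unready until the GPU driver is deployed
oc patch machineset <gpu-machineset-name> -n openshift-machine-api --type merge \
  -p '{"spec":{"template":{"spec":{"metadata":{"labels":{"cluster-api/accelerator":"<accelerator-label-value>"}}}}}}'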
RHODS-4799 - Tensorboard requires manual steps to view
When a user has TensorFlow or PyTorch workbench images and wants to use TensorBoard to display data, manual steps are necessary to include environment variables in the workbench environment, and to import those variables for use in their code.
- Workaround
When you start your basic workbench, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.
import os

# Point TensorBoard at the workbench proxy path derived from the NB_PREFIX environment variable
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHODS-3984 - Incorrect package versions displayed during notebook selection
In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.
- Workaround
- When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server and which version of each package you have by running the !pip list command in a notebook cell.
RHOAING-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.
Chapter 8. Product features
Red Hat OpenShift AI provides a rich set of features for data scientists and cluster administrators. To learn more, see Introduction to Red Hat OpenShift AI.