Release notes
Features, enhancements, resolved issues, and known issues associated with this release
Chapter 1. Upgrade to OpenShift AI 3.0 not supported
You cannot upgrade from OpenShift AI 2.25 or any earlier version to 3.0. OpenShift AI 3.0 introduces significant technology and component changes and is intended for new installations only. To use OpenShift AI 3.0, install the Red Hat OpenShift AI Operator on a cluster running OpenShift Container Platform 4.19 or later and select the fast-3.x channel.
Support for upgrades will be available in a later release, including upgrades from OpenShift AI 2.25 to a stable 3.x version.
For more information, see the Why upgrades to OpenShift AI 3.0 are not supported Knowledgebase article.
Chapter 2. Overview of OpenShift AI
Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications.
OpenShift AI provides an environment to develop, train, serve, test, and monitor AI/ML models and applications on-premise or in the cloud.
For data scientists, OpenShift AI includes Jupyter and a collection of default workbench images optimized with the tools and libraries required for model development, and the TensorFlow and PyTorch frameworks. Deploy and host your models, integrate models into external applications, and export models to host them in any hybrid cloud environment. You can enhance your projects on OpenShift AI by building portable machine learning (ML) workflows with AI pipelines by using Docker containers. You can also accelerate your data science experiments through the use of graphics processing units (GPUs) and Intel Gaudi AI accelerators.
For administrators, OpenShift AI enables data science workloads in an existing Red Hat OpenShift or ROSA environment. Manage users with your existing OpenShift identity provider, and manage the resources available to workbenches to ensure data scientists have what they require to create, train, and host models. Use accelerators to reduce costs and allow your data scientists to enhance the performance of their end-to-end data science workflows using graphics processing units (GPUs) and Intel Gaudi AI accelerators.
OpenShift AI has two deployment options:
- Self-managed software that you can install on-premise or in the cloud. You can install OpenShift AI Self-Managed in a self-managed environment such as OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift.
- A managed cloud service, installed as an add-on in Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP) or in Red Hat OpenShift Service on Amazon Web Services (ROSA classic).
For information about OpenShift AI Cloud Service, see Product Documentation for Red Hat OpenShift AI.
For information about OpenShift AI supported software platforms, components, and dependencies, see the Supported Configurations for 3.x Knowledgebase article.
For a detailed view of the 3.2 release lifecycle, including the full support phase window, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article.
Chapter 3. New features and enhancements
This section describes new features and enhancements in Red Hat OpenShift AI 3.2.
3.1. Enhancements
- PostgreSQL mandated as the production persistence layer for Llama Stack
PostgreSQL is now the only supported database for production Llama Stack deployments in OpenShift AI. The default configuration in `run.yaml` uses PostgreSQL for both core persistence and the RAG file provider. This enhancement ensures that production deployments use a scalable, production-ready persistence layer that meets enterprise performance and scalability requirements. SQLite remains available for local development and testing scenarios where appropriate.
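A minimal sketch of what the PostgreSQL-backed persistence can look like in `run.yaml` follows; the exact keys depend on your Llama Stack distribution, and the host, database, and credential values are placeholders:

```yaml
metadata_store:
  type: postgres                              # SQLite remains supported for local development only
  host: postgres.example.svc.cluster.local    # placeholder host
  port: 5432
  db: llamastack                              # placeholder database name
  user: llamastack                            # placeholder user
  password: ${env.POSTGRES_PASSWORD}          # placeholder secret reference
```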
- Enhanced TLS security for llm-d components
- This release delivers a foundational security enhancement by enforcing strict TLS validation for all internal llm-d component communication. All `insecure skip TLS verify` settings have been removed from the llm-d stack. All internal services, including the Gateway, Scheduler, and vLLM backends, now use TLS certificates automatically signed by the OpenShift Service CA. Clients are configured to trust this CA, ensuring all connections are fully encrypted and validated, which prevents man-in-the-middle (MITM) attacks and enforces a zero-trust security posture.
- Model deployment wizard available from the model catalog
The OpenShift AI user interface for configuring and deploying large language models from the model catalog has been updated to use a new deployment wizard.
This streamlined interface simplifies common deployment scenarios by providing essential configuration options with sensible defaults when deploying models from the model catalog. The deployment wizard reduces setup complexity and helps users to deploy models from the model catalog more efficiently.
- Model deployment wizard available from a model registry
The OpenShift AI user interface for configuring and deploying large language models from a model registry has been updated to use a new deployment wizard.
This streamlined interface simplifies common deployment scenarios by providing essential configuration options with sensible defaults when deploying models from a model registry. The deployment wizard reduces setup complexity and helps users to deploy models from a model registry more efficiently.
3.2. New features
- Support added to run Red Hat OpenShift AI on OpenShift Kubernetes Engine (OKE)
You can now install and run Red Hat OpenShift AI on OpenShift Kubernetes Engine (OKE). Red Hat provides a specific licensing exception for OpenShift AI users, making it easier to use OpenShift AI. With this feature, the dependent Operators required by Red Hat OpenShift AI can be installed on OKE.
Note: This exception applies exclusively to Operators used to support Red Hat OpenShift AI workloads. Installing or using these Operators for purposes unrelated to Red Hat OpenShift AI is a violation of the OKE service agreement.
To learn more about OKE, see About OpenShift Kubernetes Engine.
- Deployment strategy selection for model serving
You can now configure the deployment strategy for model deployments from the OpenShift AI dashboard. You can choose between Rolling update and Recreate strategies.
- Rolling update (Default): Maintains availability by gradually replacing old pods with new ones.
- Recreate: Terminates the existing pod before it starts the new pod. This strategy is critical for managing large language models (LLMs) that consume significant GPU resources, because it prevents the resource contention that occurs when two instances run simultaneously during an update.
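These options map to the standard Kubernetes Deployment strategy types. The following minimal sketch shows how the Recreate strategy appears on the underlying Deployment object; the names and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-predictor                        # placeholder name
spec:
  replicas: 1
  strategy:
    type: Recreate                           # terminate the old pod before starting the new one
  selector:
    matchLabels:
      app: llm-predictor
  template:
    metadata:
      labels:
        app: llm-predictor
    spec:
      containers:
      - name: server
        image: quay.io/example/vllm:latest   # placeholder serving image
```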
- New chat functionality in Generative AI Studio
- You can now clear the chat history and start a new conversation in the Playground by clicking the New Chat button. The chat interface clears the chat history while preserving your Playground configuration settings.
- Enhanced filtering for serving runtime selection
Red Hat OpenShift AI now includes improved filtering and distinct recommendations for selecting a serving runtime. You can choose how the serving runtime is determined by using the following options:
- Auto-select the best runtime for your model based on model type, model format, and hardware profile: This option automatically selects a serving runtime if there is exactly one match. It also includes hardware profile matching based on the accelerator. For example, if you have a hardware profile with the NVIDIA GPU accelerator, the system suggests the vLLM NVIDIA GPU ServingRuntime for KServe runtime.
Note: If a cluster administrator enables the Use distributed inference with llm-d by default when deploying generative models option as the preferred default for generative models in the administrator settings, the system suggests the Distributed inference with llm-d runtime.
- Select from a list of serving runtimes, including custom ones: This option displays all global and project-scoped serving runtime templates available to you.
- Feature Store integration with workbenches
Feature Store now fully integrates with data science projects and workbenches. Capabilities such as centrally managed role-based access control (RBAC) and feature lifecycle and lineage visibility are now production-ready and fully supported. You can use Feature Store to standardize feature reuse and governance across projects. This allows data scientists to work within workbenches, while platform teams maintain centralized control, security, and scalability.
Feature Store now also supports the Ray and Apache Spark AI computing frameworks. These tools enable scalable, distributed feature engineering for machine learning (ML) and generative AI workloads.
Chapter 4. Technology Preview features
This section describes Technology Preview features in Red Hat OpenShift AI 3.2. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- pgvector support as a remote vector store provider in Llama Stack
Starting with OpenShift AI 3.2, you can use PostgreSQL with the pgvector extension as a remote vector store provider for the Llama Stack `vector_store` endpoint as a Technology Preview feature. This enhancement enables vector storage backed by PostgreSQL, providing durable and transactional persistence for vector embeddings. For more information, see Llama Stack API provider support and Deploying a PostgreSQL instance with pgvector.
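For illustration, a remote pgvector provider is declared under the vector_io providers section of the Llama Stack `run.yaml`. This is a minimal sketch; the connection values are placeholders and the exact schema can vary between distributions:

```yaml
providers:
  vector_io:
  - provider_id: pgvector
    provider_type: remote::pgvector
    config:
      host: postgres.example.svc.cluster.local   # placeholder host
      port: 5432
      db: vector_db                              # placeholder database name
      user: llamastack                           # placeholder user
      password: ${env.PGVECTOR_PASSWORD}         # placeholder secret reference
```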
- Llama Stack versions in OpenShift AI 3.2
- OpenShift AI 3.2.0 uses the Open Data Hub Llama Stack version 0.3.5+rhai0 in the Llama Stack Distribution, which is based on the upstream Llama Stack version 0.3.5.
- Llama Stack servers now require installation of the PostgreSQL Operator
- In OpenShift AI 3.2, the PostgreSQL Operator is now required to deploy a Llama Stack server. For more information, see the Deploying a Llama Stack server documentation.
- Enabling high availability on Llama Stack
- Llama Stack servers can be configured to remain operational in the event of a single point of failure as a Technology Preview feature. You can enable PostgreSQL high-availability settings in your `LlamaStackDistribution` custom resource. For more information, see the Enabling high availability on Llama Stack (Optional) documentation.
- Custom embeddings on Llama Stack
OpenShift AI 3.2 allows you to customize your embedding models as a Technology Preview feature. In the version of Llama Stack shipped in OpenShift AI 3.2, vLLM controls embeddings by default. You can update the `VLLM_EMBEDDING_URL` environment variable in your `LlamaStackDistribution` custom resource to enable embeddings, or you can use custom embeddings providers. For example:

- name: ENABLE_SENTENCE_TRANSFORMERS
  value: "true"
- name: EMBEDDING_PROVIDER
  value: "sentence-transformers"
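For context, these variables are set on the server container of the `LlamaStackDistribution` custom resource. The sketch below assumes the `spec.server.containerSpec.env` field used by the Llama Stack Operator; the resource name is a placeholder:

```yaml
apiVersion: llamastack.io/v1alpha1      # assumed API group used by the Llama Stack Operator
kind: LlamaStackDistribution
metadata:
  name: my-llama-stack                  # placeholder name
spec:
  server:
    containerSpec:
      env:
      - name: ENABLE_SENTENCE_TRANSFORMERS
        value: "true"
      - name: EMBEDDING_PROVIDER
        value: "sentence-transformers"
```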
- NVIDIA NeMo Guardrails
- You can use NVIDIA NeMo Guardrails as a Technology Preview feature to add guardrails and safety controls to your deployed models in Red Hat OpenShift AI. NeMo Guardrails provides a framework for controlling conversations with large language models, enabling you to define a variety of rails, such as sensitive data detection, content filtering, or custom validation rules.
- Stop button for chatbot in Generative AI Studio
- You can interrupt the chatbot as it is composing a response to a prompt. In the Playground, after you send a prompt, the Send button in the chat input field changes to a Stop button. Click it if you want to interrupt the model’s response, for example, when the response takes longer than you anticipated or if you notice that you made an error in your prompt. The chatbot posts "You stopped this message" to confirm your stop request.
- Kubeflow Trainer v2
Kubeflow Trainer v2 is now available as a Technology Preview feature in OpenShift AI 3.2.
Kubeflow Trainer v2 is the next generation of distributed training for OpenShift AI, replacing the Kubeflow Training Operator v1 (KFTOv1). This Kubernetes-native solution simplifies how data scientists and ML engineers run PyTorch training workloads at scale using a unified TrainJob API and Python SDK.
This Technology Preview release introduces the following capabilities:
- Simplified job definitions using TrainJob and TrainingRuntime resources
- Python SDK for programmatic job creation and management
- A new web-based user interface for inspecting and interacting with training jobs
- Real-time progress tracking with visibility into training steps, epochs, and metrics
- Smart checkpoint management with automatic preservation during pod preemption or termination
- Pausing and resuming train jobs
- Resource-aware scheduling via native integration with Red Hat build of Kueue
Users of the deprecated Kubeflow Training Operator v1 (KFTOv1) should migrate their workloads to Kubeflow Trainer v2 before KFTOv1 is removed. For guidance and more details, see the migration guide.
For more information about Kubeflow Trainer v2 features and usage, see the Kubeflow Trainer v2 documentation.
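To illustrate the unified API, the following is a minimal TrainJob sketch. The API version, runtime name, and image are assumptions that might differ in your installation:

```yaml
apiVersion: trainer.kubeflow.org/v1alpha1
kind: TrainJob
metadata:
  name: pytorch-distributed-example               # placeholder name
  namespace: my-project                           # placeholder namespace
spec:
  runtimeRef:
    name: torch-distributed                       # assumption: a TrainingRuntime available on the cluster
  trainer:
    numNodes: 2                                   # distribute training across two nodes
    image: quay.io/example/pytorch-train:latest   # placeholder training image
    command: ["torchrun", "train.py"]
```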
Chapter 5. Developer Preview features
This section describes Developer Preview features in Red Hat OpenShift AI 3.2. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
- MLflow integration
- OpenShift AI now includes a Developer Preview of MLflow. MLflow uses Kubernetes namespaces (OpenShift projects) as workspaces to provide logical isolation of experiments, registered models, and prompts. MLflow uses Kubernetes role-based access control (RBAC) to authorize API requests. For more information about enabling and using MLflow in OpenShift AI, see the Configuring MLflow in OpenShift AI (Developer Preview) Knowledgebase article.
- Model-as-a-Service (MaaS) integration
This feature is available as a Developer Preview.
OpenShift AI now includes Model-as-a-Service (MaaS) to address resource consumption and governance challenges associated with serving large language models (LLMs).
MaaS provides centralized control over model access and resource usage by exposing models through managed API endpoints, allowing administrators to enforce consumption policies across teams.
This Developer Preview introduces the following capabilities:
- Policy and quota management
- Authentication and authorization
- Usage tracking
- User management
For more information, see Introducing Models-as-a-Service in OpenShift AI.
- AI Available Assets integration with Model-as-a-Service (MaaS)
This feature is available as a Developer Preview.
You can now access and consume Model-as-a-Service (MaaS) models directly from the AI Available Assets page in the GenAI Studio.
Administrators can configure MaaS by enabling the toggle on the Model Deployments page. When a model is marked as a service, it becomes global and visible across all projects in the cluster.
- Additional fields added to Model Deployments for AI Available Assets integration
This feature is available as a Developer Preview.
Administrators can now add metadata to models during deployment so that they are automatically listed on the AI Available Assets page.
The following table describes the new metadata fields that streamline the process of making models discoverable and consumable by other teams:
| Field name | Field type | Description |
|---|---|---|
| Use Case | Free-form text | Describes the model’s primary purpose, for example, "Customer Churn Prediction" or "Image Classification for Product Catalog." |
| Description | Free-form text | Provides more detailed context and functionality notes for the model. |
| Add to AI Assets | Checkbox | When enabled, automatically publishes the model and its metadata to the AI Available Assets page. |
- Compatibility of Llama Stack remote providers and SDK with MCP HTTP streaming protocol
This feature is available as a Developer Preview.
Llama Stack remote providers and the SDK are now compatible with the Model Context Protocol (MCP) HTTP streaming protocol.
This enhancement enables developers to build fully stateless MCP servers, simplify deployment on standard Llama Stack infrastructure (including serverless environments), and improve scalability. It also prepares for future enhancements such as connection resumption and provides a smooth transition away from Server-Sent Events (SSE).
- Packaging of ITS Hub dependencies to the Red Hat–maintained Python index
This feature is available as a Developer Preview.
All Inference Time Scaling (ITS) runtime dependencies are now packaged in the Red Hat-maintained Python index, allowing Red Hat AI and OpenShift AI customers to install `its_hub` and its dependencies directly by using `pip`. This enhancement enables users to build custom inference images with ITS algorithms focused on improving model accuracy at inference time without requiring model retraining, such as:
- Particle filtering
- Best-of-N
- Beam search
- Self-consistency
- Verifier or PRM-guided search
For more information, see the ITS Hub on GitHub.
- Dynamic hardware-aware continual training strategy
Static hardware profile support is now available to help users select training methods, models, and hyperparameters based on VRAM requirements and reference benchmarks. This approach ensures predictable and reliable training workflows without dynamic hardware discovery.
The following components are included:
- API Memory Estimator: Accepts model, training method, dataset metadata, and assumed hyperparameters as input and returns an estimated VRAM requirement for the training job. Delivered as an API within Training Hub.
- Reference Profiles and Benchmarks: Provides end-to-end training time benchmarks for OpenShift AI Innovation (OSFT) and Performance Team (LAB SFT) baselines, delivered as static tables and documentation in Training Hub.
- Hyperparameter Guidance: Publishes safe starting ranges for key hyperparameters such as learning rate, batch size, epochs, and LoRA rank. Integrated into example notebooks maintained by the AI Innovation team.
Important: Hardware discovery is not included in this release. Only static reference tables and guidance are provided; automated GPU or CPU detection is not yet supported.
- Human-in-the-Loop (HIL) functionality in the Llama Stack agent
Human-in-the-Loop (HIL) functionality has been added to the Llama Stack agent to allow users to approve unread tool calls before execution.
This enhancement includes the following capabilities:
- Users can approve or reject unread tool calls through the responses API.
- Configuration options specify which tool calls require HIL approval.
- Tool calls pause until user approval is received for HIL-enabled tools.
- Tool calls that do not require HIL continue to run without interruption.
Chapter 6. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Supported Configurations for 3.x Knowledgebase article.
6.1. Deprecated
6.1.1. Deprecated SQLite as a production metadata store for Llama Stack
Starting with OpenShift AI 3.2, SQLite is deprecated for use as a metadata store in production Llama Stack deployments. PostgreSQL is required for production-grade environments to ensure adequate performance, concurrency, and scalability. SQLite remains available for local development and testing only and must be explicitly configured. This includes configurations that define SQLite backends such as kv-sqlite or sql-sqlite in the Llama Stack storage configuration. SQLite is not intended for production workloads.
6.1.2. Deprecated annotation format for Connection Secrets
Starting with OpenShift AI 3.0, the `opendatahub.io/connection-type-ref` annotation format for creating Connection Secrets is deprecated.
For all new Connection Secrets, use the `opendatahub.io/connection-type-protocol` annotation instead. While both formats are currently supported, `connection-type-protocol` takes precedence and should be used for future compatibility.
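As an illustration, a connection Secret using the newer annotation might look like the following sketch; the protocol value and connection details are placeholder assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-model-connection                       # placeholder name
  annotations:
    opendatahub.io/connection-type-protocol: s3   # placeholder protocol value
stringData:
  AWS_S3_ENDPOINT: https://s3.example.com         # placeholder connection details
  AWS_S3_BUCKET: models
```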
6.1.3. Deprecated Kubeflow Training Operator v1
The Kubeflow Training Operator (v1) is deprecated starting with OpenShift AI 2.25 and is planned for removal in a future release. This deprecation is part of the transition to Kubeflow Trainer v2, which delivers enhanced capabilities and improved functionality.
6.1.4. Deprecated TrustyAI service CRD v1alpha1
Starting with OpenShift AI 2.25, the `v1alpha1` version is deprecated and planned for removal in an upcoming release. You must update the TrustyAI Operator to version v1 to receive future Operator updates.
6.1.5. Deprecated KServe Serverless deployment mode
Starting with OpenShift AI 2.25, the KServe Serverless deployment mode is deprecated. You can continue to deploy models by migrating to the KServe RawDeployment mode. If you are upgrading to Red Hat OpenShift AI 3.0, you must migrate all workloads that use the retired Serverless or ModelMesh modes before upgrading.
6.1.6. Deprecated model registry API v1alpha1
Starting with OpenShift AI 2.24, the model registry API version v1alpha1 is deprecated and will be removed in a future release of OpenShift AI. The latest model registry API version is v1beta1.
6.1.7. Multi-model serving platform (ModelMesh)
Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.
For more information or for help on using the single-model serving platform, contact your account manager.
6.1.8. Accelerator Profiles and legacy Container Size selector deprecated
Starting with OpenShift AI 3.0, Accelerator Profiles and the Container Size selector for workbenches are deprecated.
These features are replaced by the more flexible and unified Hardware Profiles capability.
6.1.9. Deprecated OpenVINO Model Server (OVMS) plugin
The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.
6.1.10. OpenShift AI dashboard user management moved from OdhDashboardConfig to Auth resource
Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.
| Resource | 2.16 and earlier | 2.17 and later versions |
|---|---|---|
| apiVersion | opendatahub.io/v1alpha | services.platform.opendatahub.io/v1alpha1 |
| kind | OdhDashboardConfig | Auth |
| name | odh-dashboard-config | auth |
| Admin groups | spec.groupsConfig.adminGroups | spec.adminGroups |
| User groups | spec.groupsConfig.allowedGroups | spec.allowedGroups |
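For GitOps workflows, a minimal sketch of the replacement Auth resource follows; the group names are placeholders:

```yaml
apiVersion: services.platform.opendatahub.io/v1alpha1
kind: Auth
metadata:
  name: auth
spec:
  adminGroups:
  - rhoai-admins             # placeholder admin group
  allowedGroups:
  - system:authenticated     # placeholder user group
```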
6.1.11. Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
| Deprecated parameter | Replaced by |
|---|---|
| head_cpus | head_cpu_requests, head_cpu_limits |
| head_memory | head_memory_requests, head_memory_limits |
| min_cpus | worker_cpu_requests |
| max_cpus | worker_cpu_limits |
| min_memory | worker_memory_requests |
| max_memory | worker_memory_limits |
| head_gpus | head_extended_resource_requests |
| num_gpus | worker_extended_resource_requests |
You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
6.2. Removed functionality
- Caikit-NLP component removed
The `caikit-nlp` component has been formally deprecated and removed from OpenShift AI 3.0. This runtime is no longer included or supported in OpenShift AI. Users should migrate any dependent workloads to supported model serving runtimes.
- TGIS component removed
The TGIS component, which was deprecated in OpenShift AI 2.19, has been removed in OpenShift AI 3.0.
TGIS continued to be supported through the OpenShift AI 2.16 Extended Update Support (EUS) lifecycle, which ended in June 2025.
Starting with this release, TGIS is no longer available or supported. Users should migrate their model serving workloads to supported runtimes such as Caikit or Caikit-TGIS.
- AppWrapper Controller removed
The AppWrapper controller has been removed from OpenShift AI as part of the broader CodeFlare Operator removal process.
This change eliminates redundant functionality and reduces maintenance overhead and architectural complexity.
6.2.1. CodeFlare Operator removed
Starting with OpenShift AI 3.0, the CodeFlare Operator has been removed.
The functionality previously provided by the CodeFlare Operator is now included in the KubeRay Operator, which provides equivalent capabilities such as mTLS, network isolation, and authentication.
- LAB-tuning feature removed
Starting with OpenShift AI 3.0, the LAB-tuning feature has been removed.
Users who previously relied on LAB-tuning for large language model customization should migrate to alternative fine-tuning or model customization methods.
- Embedded Kueue component removed
The embedded Kueue component, which was deprecated in OpenShift AI 2.24, has been removed in OpenShift AI 3.0.
OpenShift AI now uses the Red Hat Build of the Kueue Operator to provide enhanced workload scheduling across distributed training, workbench, and model serving workloads.
The embedded Kueue component is not supported in any Extended Update Support (EUS) release.
- Removal of DataSciencePipelinesApplication v1alpha1 API version
The `v1alpha1` API version of the `DataSciencePipelinesApplication` custom resource (`datasciencepipelinesapplications.opendatahub.io/v1alpha1`) has been removed. OpenShift AI now uses the stable `v1` API version (`datasciencepipelinesapplications.opendatahub.io/v1`). You must update any existing manifests or automation to reference the `v1` API version to ensure compatibility with OpenShift AI 3.0 and later.
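For example, a minimal manifest referencing the stable API version looks like the following sketch; the name and namespace are placeholders, and your existing spec is otherwise unchanged:

```yaml
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa                           # placeholder name
  namespace: my-data-science-project   # placeholder namespace
spec: {}                               # keep your existing spec; only the apiVersion changes
```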
6.2.2. Microsoft SQL Server command-line tool removal
Starting with OpenShift AI 2.24, the Microsoft SQL Server command-line tools (sqlcmd, bcp) have been removed from workbenches. You can no longer manage Microsoft SQL Server using the preinstalled command-line client.
6.2.3. Model registry ML Metadata (MLMD) server removal
Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. This change simplifies the overall architecture and ensures the long-term maintainability and efficiency of the model registry by transitioning from the ml-metadata component to direct database access within the model registry itself.
If you see the following error for your model registry deployment, your database schema migration has failed:

error: error connecting to datastore: Dirty database version {version}. Fix and force version.
You can fix this issue by manually changing the database from a dirty state to 0 before traffic can be routed to the pod. Perform the following steps:
1. Find the name of your model registry database pod:

   kubectl get pods -n <your-namespace> | grep model-registry-db

   Replace <your-namespace> with the namespace where your model registry is deployed.

2. Use `kubectl exec` to run the query on the model registry database pod:

   kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"

   Replace <your-namespace> with your model registry namespace and <your-db-pod-name> with the pod name that you found in the previous step. Replace <your-db-name> with your model registry database name.

This resets the dirty state in the database, allowing the model registry to start correctly.
6.2.4. Embedded subscription channel not used in some versions
For OpenShift AI 2.8 to 2.20 and 2.22 to 3.2, the embedded subscription channel is not used. You cannot select the embedded channel for a new installation of the Operator for those versions. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
6.2.5. Anaconda removal
Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.
If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:
1. Remove the secret that contains your Anaconda password:

   oc delete secret -n redhat-ods-applications anaconda-ce-access

2. Remove the ConfigMap for the Anaconda validation cronjob:

   oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result

3. Remove the Anaconda image stream:

   oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda

4. Remove the Anaconda job that validated the downloading of images:

   oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run

5. Remove any pods related to Anaconda cronjob runs:

   oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run*/'
6.2.6. Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs are no longer stored in S3-compatible storage for Python scripts that run in Elyra pipelines. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.
If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.
Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
6.2.7. Beta subscription channel no longer used
Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
6.2.8. HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
Chapter 7. Resolved issues
The following notable issues are resolved in Red Hat OpenShift AI 3.2. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 3.2 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.
7.1. Issues resolved in Red Hat OpenShift AI 3.2
RHOAIENG-31071 - LM-Eval evaluations using Parquet datasets fail on IBM Z (s390x)
Before this update, Apache Arrow’s Parquet implementation contained endianness-specific code that was incompatible with the big-endian IBM Z (s390x) architecture, causing byte-order mismatches when reading Parquet-formatted datasets. As a result, LM-Eval evaluation tasks using datasets in Parquet format failed on s390x systems with parsing errors. This release resolves the issue by applying compatibility patches to Apache Arrow and building a custom version specifically for s390x that supports proper Parquet encoding and decoding.
RHOAIENG-38579 - Cannot stop models served with the Distributed Inference Server runtime
Before this update, you could not stop models served with the Distributed Inference Server with llm-d runtime from the OpenShift AI dashboard. This issue has been resolved.
RHOAIENG-38180 - Unable to send requests to Feature Store using the Feast SDK from workbench
Before this update, Feast was missing certificates and a service when running the default configuration, which prevented you from sending requests to your Feature Store by using the Feast SDK.
This issue has been resolved.
RHOAIENG-41588 - Standard OpenShift Container Platform Route support added for dashboard access
Before this update, the transition to using the Gateway API for Red Hat OpenShift AI version 3.0 required load balancer configuration. This configuration requirement caused usability issues and led to deployment delays for users of bare-metal and cloud infrastructures. This issue has been resolved. The Gateway API now supports Cluster IP mode and standard OpenShift Container Platform Route configuration in addition to the load balancer configuration option, simplifying dashboard access for users.
For more information, see Configurable Ingress Mode for RHOAI 3.2 on Bare Metal, OpenStack and Private Clouds.
RHOAIENG-44616 - Inferencing with granite-3b model fails on IBM Power
Before this update, inference services for the granite-3b-code-instruct-2k model were created successfully. However, when a chat completion request was sent, it failed with an Internal server error. This issue is now resolved.
RHOAIENG-37686 - Metrics not displayed on the Dashboard due to image name mismatch in runtime detection logic
Previously, metrics were not displayed on the OpenShift AI dashboard because digest-based image names were not correctly recognized by the runtime detection system. This issue affected all InferenceService deployments in OpenShift AI 2.25 and later. This issue has been resolved.
RHOAIENG-37492 - Dashboard console link not accessible on IBM Power in 3.0.0
Previously, on private cloud deployments running on IBM Power, the OpenShift AI dashboard link was not visible in the OpenShift console when the dashboard was enabled in the DataScienceCluster configuration. As a result, users could not access the dashboard through the console without manually creating a route. This issue has been resolved.
RHOAIENG-1152 - Basic workbench creation process fails for users who have never logged in to the dashboard
This issue is now obsolete as of OpenShift AI 3.0. The basic workbench creation process has been updated, and this behavior no longer occurs.
Chapter 8. Known issues
This section describes known issues in Red Hat OpenShift AI 3.2 and any known methods of working around these issues.
RHOAIENG-24545 - Runtime images are not present in workbench after first start
The list of runtime images does not populate properly in the first workbench instance that runs in a namespace. As a result, no image is shown for selection in the Elyra pipeline editor.
- Workaround
- Restart the workbench. After restarting the workbench, the list of runtime images populates both the workbench and the select box for the Elyra pipeline editor.
RHOAIENG-36757 - "Existing cluster storage" option is missing during model deployment when no connections exist
When you create a model deployment in a project with no data connections defined, the Existing cluster storage option is not displayed, even if Persistent Volume Claims (PVCs) suitable for model storage exist in the project. This prevents you from selecting an existing PVC for the deployment.
- Workaround
- Create at least one connection of type URI in the project to display the Existing cluster storage option.
RHOAIENG-37743 - Progress tab does not display steps when starting a workbench
When you start a workbench, the Progress tab on the Workbench Status screen does not display the status of individual steps. Instead, a warning is issued that some steps might be repeated or occur in any order.
- Workaround
- To view the detailed status, check the Event Log tab, or connect to the OpenShift web console and view the pod details.
AIPCC-8019 - Jemalloc consumes more memory than glibc when deploying models on IBM Z systems
When a model is deployed using jemalloc as the memory allocator, memory consumption is significantly higher compared to glibc. A comparison shows more than a fifty percent memory increase when jemalloc is used.
- Workaround
- In RHAIIS 3.2.5, unset the LD_PRELOAD environment variable to disable jemalloc and use glibc as the memory allocator.
AIPCC-8043 - RHAIIS fails to start on IBM Z systems with FIPS enabled when using the Spyre platform plugin
RHAIIS fails to start on IBM Z systems with FIPS enabled when using the Spyre platform plugin, preventing model deployment. Logs show that a _hashlib.UnsupportedDigestmodError occurs during the model startup. RHAIIS 3.2.5 Spyre on IBM Z uses vLLM 0.11.0.
- Workaround
- None. The issue is fixed in vLLM 0.11.1, which is planned for inclusion in future versions of RHAIIS.
RHOAIENG-44557 - Offline Feast container fails to start when applying the default FeatureStore YAML
When applying the default FeatureStore YAML for Feast, the offline-store container in the resulting pod fails to start. While the feature-server, registry-server, and ui containers start successfully, the offline-store container crashes because the current image lacks PyArrow built with flight support, a mandatory requirement for that specific container.
- Workaround
- None.
RHOAIENG-44610 - MaaS component resource is visible in the OpenShift Operator UI
The ModelsAsService resource displays incorrectly in the Operator resource list. This is an internal resource and should not be displayed there.
- Workaround
- None.
RHOAIENG-44931 - Downstream MLflow image does not have PostgreSQL support
The Developer Preview MLflow container image does not have the psycopg2 Python package preinstalled for connecting to PostgreSQL.
- Workaround
- Use SQLite.
RHOAIENG-44516 - Kubernetes tokens not supported by the Red Hat OpenShift AI Gateway
Red Hat OpenShift AI does not accept Kubernetes service account tokens when you authenticate through the dashboard MLflow URL.
- Workaround
- To authenticate with a service account token, complete the following steps:
  1. Create an OpenShift Route directly to the MLflow service endpoints.
  2. Use the Route URL as the MLFLOW_TRACKING_URI when you authenticate.
RHOAIENG-46420 - Inference failures with vLLM when using the temperature parameter on IBM Z
On IBM Z platforms, there are inference failures with vLLM when the temperature parameter is explicitly set in inference requests. Any request that includes the temperature field results in an inference failure. When this occurs, the vLLM process terminates, causing the serving pod to enter a CrashLoopBackOff state. The pod restarts automatically, but if the temperature parameter remains, subsequent inference requests continue to fail. This issue does not occur when the temperature parameter is omitted from the request.
- Workaround
- None.
RHOAIENG-46944 - OpenShift route update fails when GatewayConfig subdomain changes
When updating the GatewayConfig subdomain field, the Gateway controller fails to update the corresponding OpenShift Route because the update verb is missing from the Role-Based Access Control (RBAC) permissions. As a result, users cannot change the Gateway subdomain after initial deployment and see the following "permission denied" error:

Route.route.openshift.io "data-science-gateway" is invalid: spec.host: Invalid value: "test.apps.rosa.p2r4t4z7a2o1m8i.c8l6.p3.openshiftapps.com": you do not have permission to set the host field of the route
- Workaround
- Use the default subdomain provided during initial deployment. Avoid adding or changing the `GatewayConfig` subdomain field, because post-deployment modifications have limited support.
RHOAIENG-37228 - Manual DNS configuration required on OpenStack and private cloud environments
When deploying OpenShift AI 3.2 on OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, external access to components such as the dashboard and workbenches might fail after installation. This occurs because the dynamically provisioned LoadBalancer Service does not automatically register its IP address in external DNS.
- Workaround
- To restore access, manually create the required A or CNAME records in your external DNS system. For instructions, see the Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds Knowledgebase article.
RHOAIENG-38658 - TrustyAI service issues during model inference with token authentication on IBM Z (s390x)
On IBM Z (s390x) architecture, the TrustyAI service encounters errors during model inference when token authentication is enabled. A JsonParseException occurs while writing to the TrustyAI service logger, causing the bias monitoring process to fail or behave unexpectedly.
- Workaround
- Run the TrustyAI service without authentication. The issue occurs only when token authentication is enabled.
RHOAIENG-38333 - Code generated by the Generative AI Playground is invalid and required packages are missing from workbenches
Some code automatically generated by the Generative AI Playground might cause syntax errors when run in OpenShift AI workbenches. Additionally, the LlamaStackClient package is not currently included in standard workbench images.
- Workaround
- Install the missing package manually within your workbench environment before running the generated code:

  pip install llama-stack-client
RHOAIENG-38263 - Intermittent failures with Guardrails Detector model on Hugging Face runtime for IBM Z
On IBM Z platforms, the Guardrails Detector model running on the Hugging Face runtime might intermittently fail to process identical requests. In some cases, a request that previously returned valid results fails with a parse error similar to the following example:
Invalid numeric literal at line 1, column 20
This error can cause the serving pod to temporarily enter a CrashLoopBackOff state, although it typically recovers automatically.
- Workaround
- None. The pod restarts automatically and resumes normal operation.
RHOAIENG-38253 - Distributed Inference Server with llm-d not listed on the Serving Runtimes page
While Distributed Inference Server with llm-d appears as an available option when deploying a model, it is not listed on the Serving Runtimes page under the Settings section.
This occurs because Distributed Inference Server with llm-d is a composite deployment type that includes additional components beyond a standard serving runtime. It therefore does not appear in the list of serving runtimes visible to administrators and cannot currently be hidden from end users.
- Workaround
- None. The Distributed Inference Server with llm-d option can still be used for model deployments, but it cannot be managed or viewed from the Serving Runtimes page.
RHOAIENG-38252 - Model Registry Operator does not work with BYOIDC mode on OpenShift 4.20
On OpenShift 4.20 clusters configured with Bring Your Own Identity Provider (BYOIDC) mode, deploying the Model Registry Operator fails.
When you create a ModelRegistry custom resource, it does not reach the `available: True` state and instead reports an error status.
- Workaround
- None. You cannot create or deploy a Model Registry instance when using BYOIDC mode on OpenShift 4.20.
RHOAIENG-37916 - LLM-D deployed model shows failed status on the Deployments page
Models deployed with the Distributed Inference Server with llm-d runtime initially display a Failed status on the Deployments page in the OpenShift AI dashboard, even though the associated pod logs report no errors or failures.
To confirm the status of the deployment, use the OpenShift console to monitor the pods in the project. When the model is ready, the OpenShift AI dashboard updates the status to Started.
- Workaround
- Wait for the model status to update automatically, or check the pod statuses in the OpenShift console to verify that the model has started successfully.
RHOAIENG-37882 - Custom workbench (AnythingLLM) fails to load
Deploying a custom workbench such as AnythingLLM 1.8.5 might fail to finish loading. Starting with OpenShift AI 3.0, all workbenches must be compatible with the Kubernetes Gateway API’s path-based routing. Custom workbench images that do not support this requirement fail to load correctly.
- Workaround
- Update your custom workbench image to support path-based routing by serving all content from the `${NB_PREFIX}` path (for example, `/notebook/<namespace>/<workbench-name>`). Requests to paths outside this prefix (such as `/index.html` or `/api/data`) are not routed to the workbench container.
To fix existing workbenches:
- Update your application to handle requests at `${NB_PREFIX}/...` paths.
- Configure the base path in your framework, for example: `FastAPI(root_path=os.getenv('NB_PREFIX', ''))`
- Update nginx to preserve the prefix in redirects.
- Implement health endpoints returning HTTP 200 at `${NB_PREFIX}/api`, `${NB_PREFIX}/api/kernels`, and `${NB_PREFIX}/api/terminals`.
- Use relative URLs and remove any hardcoded absolute paths such as `/menu`.
For more information, see the migration guide: Gateway API migration guide.
RHOAIENG-37855 - Model deployment from Model Catalog fails due to name length limit
When deploying certain models from the Model Catalog, the deployment might fail silently and remain in the Starting state. This issue occurs because KServe cannot create a deployment from the InferenceService when the resulting object name exceeds the 63-character limit.
- Example
- Attempting to deploy the model `RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic` results in KServe trying to create a deployment named `isvc.redhataimistral-small-31-24b-instruct-2503-fp8-dynamic-predictor`, which has 69 characters and exceeds the maximum allowed length.
- Workaround
- Use shorter model names or rename the `InferenceService` to ensure the generated object name stays within the 63-character limit.
RHOAIENG-37842 - Ray workloads requiring ray.init() cannot be triggered outside OpenShift AI
Ray workloads that require `ray.init()` cannot be triggered outside the OpenShift AI environment. These workloads must be submitted from within a workbench or pipeline running on OpenShift AI. Running these workloads externally is not supported and results in initialization failures.
- Workaround
- Run Ray workloads that call `ray.init()` only within an OpenShift AI workbench or pipeline context.
RHOAIENG-37667 - Model-as-a-Service (MaaS) available only for LLM-D runtime
Model-as-a-Service (MaaS) is currently supported only for models deployed with the Distributed Inference Server with llm-d runtime. Models deployed with the vLLM runtime cannot be served by MaaS at this time.
- Workaround
- None. Use the Distributed Inference Server with llm-d runtime for deployments that require Model-as-a-Service functionality.
RHOAIENG-37561 - Dashboard console link fails to access OpenShift AI on IBM Z clusters in 3.0.0
When attempting to access the OpenShift AI 3.0.0 dashboard using the console link on an IBM Z cluster, the connection fails.
- Workaround
- Create a route to the Gateway by applying a YAML file.
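A minimal sketch of such a Route follows; the service name, namespace, and hostname are assumptions that you must replace with the values from your cluster's Gateway deployment:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: data-science-gateway                         # placeholder name
  namespace: openshift-ingress                       # assumption: namespace of the Gateway service
spec:
  host: data-science-gateway.apps.<cluster-domain>   # placeholder hostname
  to:
    kind: Service
    name: data-science-gateway-openshift-default     # assumption: service created for the Gateway
  port:
    targetPort: https
  tls:
    termination: passthrough                         # assumption: TLS terminates at the Gateway
```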
RHOAIENG-37259 - Elyra Pipelines not supported on IBM Z (s390x)
Elyra Pipelines depend on Data Science Pipelines (DSP) for orchestration and validation. Because DSP is not currently available on IBM Z, Elyra pipeline-related functionality and tests are skipped.
- Workaround
- None. Elyra Pipelines will function correctly once DSP support is enabled and validated on IBM Z.
RHOAIENG-37015 - TensorBoard reporting fails in PyTorch 2.8 training image
When using TensorBoard reporting for training jobs that use the SFTTrainer with the image registry.redhat.io/rhoai/odh-training-cuda128-torch28-py312-rhel9:rhoai-3.0, or when the report_to parameter is omitted from the training configuration, the training job fails with a JSON serialization error.
- Workaround
- Install the latest versions of the `transformers` and `trl` packages and update the `torch_dtype` parameter to `dtype` in the training configuration.
If you are using the Training Operator SDK, you can specify the packages to install by using the `packages_to_install` parameter in the `create_job` function:
packages_to_install=[
    "transformers==4.57.1",
    "trl==0.24.0"
]
RHAIENG-1795 - CodeFlare with Ray does not work with Gateway
When running the following commands, the output indicates that the Ray cluster has been created and is running, but the cell never completes because the Gateway route does not respond correctly:
cluster.up()
cluster.wait_ready()
As a result, subsequent operations such as fetching the Ray cluster or obtaining the job client fail, preventing job submission to the cluster.
- Workaround
- None. The Ray Dashboard Gateway route does not function correctly when created through CodeFlare.
RHAIENG-1796 - Pipeline name must be DNS compliant when using Kubernetes pipeline storage
When using Kubernetes as the storage backend for pipelines, Elyra does not automatically convert pipeline names to DNS-compliant values. If a non-DNS-compliant name is used when starting an Elyra pipeline, an error similar to the following appears:
[TIP: did you mean to set 'https://ds-pipeline-dspa-robert-tests.apps.test.rhoai.rh-aiservices-bu.com/pipeline' as the endpoint, take care not to include 's' at end]
- Workaround
- Use DNS-compliant names when creating or running Elyra pipelines.
RHAIENG-1139 - Cannot deploy LlamaStackDistribution with the same name in multiple namespaces
If you create two LlamaStackDistribution resources with the same name in different namespaces, the ReplicaSet for the second resource fails to start the Llama Stack pod. The Llama Stack Operator does not correctly assign security constraints when duplicate names are used across namespaces.
- Workaround
- Use a unique name for each `LlamaStackDistribution` in every namespace. For example, include the project name or add a suffix, such as `llama-stack-distribution-209342`.
RHAIENG-1624 - Embeddings API timeout on disconnected clusters
On disconnected clusters, calls to the embeddings API might time out when using the default embedding model (ibm-granite/granite-embedding-125m-english) included in the default Llama Stack distribution image.
- Workaround
- Add environment variables to the `LlamaStackDistribution` custom resource to use the embedded model offline.
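The exact variables are environment-specific; the following sketch shows the general shape by using the standard Hugging Face offline-mode variable as an assumed example:

```yaml
spec:
  server:
    containerSpec:
      env:
      - name: HF_HUB_OFFLINE    # assumption: forces the bundled embedding model to load from the image cache
        value: "1"
```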
RHOAIENG-34923 - Runtime configuration missing when running a pipeline from JupyterLab
The runtime configuration might not appear in the Elyra pipeline editor when you run a pipeline from the first active workbench in a project. This occurs because the configuration fails to populate for the initial workbench session.
- Workaround
- Restart the workbench. After restarting, the runtime configuration becomes available for pipeline execution.
RHAIENG-35055 - Model catalog fails to initialize after upgrading from OpenShift AI 2.24
After upgrading from OpenShift AI 2.24, the model catalog might fail to initialize and load. The OpenShift AI dashboard displays a Request access to model catalog error.
- Workaround
Delete the existing model catalog ConfigMap and deployment by running the following commands:
oc delete configmap model-catalog-sources -n rhoai-model-registries --ignore-not-found
oc delete deployment model-catalog -n rhoai-model-registries --ignore-not-found
RHAIENG-35529 - Reconciliation issues in Data Science Pipelines Operator when using external Argo Workflows
If you enable the embedded Argo Workflows controllers (argoWorkflowsControllers: Managed) before deleting an existing external Argo Workflows installation, the workflow controller might fail to start and the Data Science Pipelines Operator (DSPO) might not reconcile its custom resources correctly.
- Workaround
- Before enabling the embedded Argo Workflows controllers, delete any existing external Argo Workflows instance from the cluster.
RHOAIENG-36817 - Inference server fails when Model server size is set to small
When creating an inference service via the dashboard, selecting a small Model server size causes subsequent inferencing requests to fail. As a result, the deployment of the inference service itself succeeds, but the inferencing requests fail with a timeout error.
- Workaround
- To resolve this issue, select large from the Model server size dropdown.
RHOAIENG-33995 - Deployment of an inference service for Phi and Mistral models fails
The creation of an inference service for Phi and Mistral models by using the vLLM runtime on an IBM Power cluster with OpenShift Container Platform 4.19 fails due to an error related to the CPU backend. As a result, these models cannot be deployed and inference service creation fails.
- Workaround
- To resolve this issue, disable the sliding_window mechanism in the serving runtime if it is enabled for CPU and Phi models. Sliding window is not currently supported in V1.
RHOAIENG-33795 - Manual Route creation needed for gRPC endpoint verification for Triton Inference Server on IBM Z
When you verify the Triton Inference Server with a gRPC endpoint, the Route is not created automatically. This happens because the Operator currently defaults to creating an edge-terminated route for REST only.
- Workaround
To resolve this issue, create the Route for the gRPC endpoint manually:
When the model deployment pod is up and running, define an edge-terminated Route object in a YAML file.
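A minimal sketch, assuming the predictor Service exposes a port named grpc; the names and hostname are placeholders to adapt to your deployment:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: <model-name>-grpc            # placeholder name
  namespace: <project-namespace>
spec:
  host: <custom-hostname>            # optional; omit to use the default host
  to:
    kind: Service
    name: <predictor-service-name>   # the predictor Service for your deployment
  port:
    targetPort: grpc                 # assumption: gRPC port name on the Service
  tls:
    termination: edge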
Create the Route object:

oc apply -f <route-file-name>.yaml

To verify the endpoint, send an inference request over gRPC.
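A sketch using grpcurl; the tool and the RPC shown (ServerReady, from Triton's grpc_service.proto) are assumptions consistent with the callouts below:

grpcurl -cacert <ca_cert_file> \
  -protoset <triton_protoset_file> \
  <route-hostname>:443 \
  inference.GRPCInferenceService/ServerReady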
In the command:
- <ca_cert_file> is the path to your cluster router CA certificate (for example, router-ca.crt).
- <triton_protoset_file> is a compiled protobuf descriptor file. You can generate it with protoc -I. --descriptor_set_out=triton_desc.pb --include_imports grpc_service.proto. Download the grpc_service.proto and model_config.proto files from the triton-inference-server GitHub repository.
RHOAIENG-33697 - Unable to Edit or Delete models unless status is "Started"
When you deploy a model on the NVIDIA NIM or single-model serving platform, the Edit and Delete options in the action menu are not available for models in the Starting or Pending states. These options become available only after the model has been successfully deployed.
- Workaround
- Wait until the model is in the Started state to make any changes or to delete the model.
RHOAIENG-33645 - LM-Eval Tier1 test failures
LM-Eval Tier1 tests can fail because confirm_run_unsafe_code is not passed as an argument when a job is run, if you are using an older version of the trustyai-service-operator.
- Workaround
- Ensure that you are using the latest version of the trustyai-service-operator and that AllowCodeExecution is enabled.
RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade
After upgrading from OpenShift AI version 2.22 or earlier to version 2.23 or later with the model registry component enabled, the model registry Operator might enter a restart loop. This is due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod.
- Workaround
To resolve this issue, trigger a reconciliation of the model-registry-operator-controller-manager deployment. Adding the opendatahub.io/managed='true' annotation to the deployment accomplishes this and applies the correct memory limit. You can add the annotation by running the following command:

oc annotate deployment model-registry-operator-controller-manager -n redhat-ods-applications opendatahub.io/managed='true' --overwrite

Note: This command overwrites custom values in the model-registry-operator-controller-manager deployment. For more information about custom deployment values, see Customizing component deployment resources.

After the deployment updates and the memory limit increases from 128Mi to 256Mi, the container memory usage stabilizes and the restart loop stops.
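To confirm the new limit is applied, you can inspect the manager container; this is a routine check, and the container name manager is taken from the description above:

$ oc get deployment model-registry-operator-controller-manager -n redhat-ods-applications \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="manager")].resources.limits.memory}'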
RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization
When you remove a DSCInitialization resource and create a new one by using the OpenShift AI console form view, a Technology Preview observability stack is enabled. This results in the deployment of an unwanted observability stack when you recreate a DSCInitialization resource.
- Workaround
- To resolve this issue, manually remove the metrics and traces fields when recreating the DSCInitialization resource in the form view. This is not required if you want to use the Technology Preview observability stack.
RHOAIENG-32599 - Inference service creation fails on IBM Z cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name.
- Workaround
- None.
RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19
When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).
- Workaround
- When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all, as in the sketch that follows.
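An example placement in the InferenceService spec; this is a sketch that assumes the standard KServe model container env list:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>          # placeholder
spec:
  predictor:
    model:
      env:
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"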
RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access
When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.
The following error appears in the logs:
Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
...
PermissionError: [Errno 13] Permission denied: '/.config'
- Workaround
- To prevent this error and suppress the usage stats logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment. This disables automatic usage reporting, avoiding permission issues when writing to system directories.
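For example, as an entry in the inference service container's env list; the placement is a sketch:

env:
  - name: VLLM_NO_USAGE_STATS
    value: "1"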
RHOAIENG-24545 - Runtime images are not present in the workbench after the first start
The list of runtime images does not populate correctly in the first workbench instance that runs in a namespace; as a result, no image is shown for selection in the Elyra pipeline editor.
- Workaround
- Restart the workbench. After restarting the workbench, the list of runtime images populates both the workbench and the select box for the Elyra pipeline editor.
RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold
When you click Distributed workloads → Project metrics and view the Requested resources section, the charts show the requested resource values and the total shared quota value for each resource (CPU and Memory). However, when the Requested by all projects value exceeds the Warning threshold value for that resource, the expected warning message is not displayed.
- Workaround
- None.
SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe
After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.
- Workaround
Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:
- Ensure KServe is disabled.
- Get the webhooks:

oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative

- Delete the webhooks (an example follows this list).
- Enable KServe.
- Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
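Deletion follows the usual pattern; for example, for webhook names returned by the previous command (the names are placeholders):

oc delete ValidatingWebhookConfiguration <webhook-name>
oc delete MutatingWebhookConfiguration <webhook-name>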
RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard
When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.
When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.
This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in AI pipelines, see Storing data with pipelines.
- Workaround
- When storing files in an Elyra pipeline, use a different subfolder name on each pipeline run, as in the sketch that follows.
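For example, a pipeline node can compute a timestamped subfolder before writing its outputs; this is a sketch, and the base path and file names are placeholders:

import os
from datetime import datetime, timezone

# Create a unique subfolder per run so a new run cannot overwrite
# the outputs of a previous run of the same pipeline.
run_subfolder = datetime.now(timezone.utc).strftime("run-%Y%m%d-%H%M%S")
output_dir = os.path.join("outputs", run_subfolder)  # placeholder base path
os.makedirs(output_dir, exist_ok=True)

with open(os.path.join(output_dir, "results.csv"), "w") as f:
    f.write("metric,value\n")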
OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment
This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.
- Workaround
- None.
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.
- Workaround
You can confirm that a pipeline has finished running by checking that no child tasks remain active:
- From the OpenShift AI dashboard, click Develop & train → Pipelines → Runs.
- From the Project list, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
  - If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
  - If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageUri field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.
KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
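For example, with a bucket laid out as follows, the dashboard Path field or the storageUri value points at the models/ prefix and KServe resolves the 1/ subdirectory; bucket and file names are placeholders, and the s3:// scheme notation is an assumption:

<s3_storage_bucket>/models/1/model.xml
<s3_storage_bucket>/models/1/model.bin

storageUri: s3://<s3_storage_bucket>/models/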
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, add the /v2/models/<model-name>/infer string to the end of the URL, replacing <model-name> with the name of your deployed model, as in the example that follows.
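For example, the completed URL can be queried with curl using the Open Inference Protocol request format; the input tensor shown is an arbitrary placeholder:

curl -k https://<inference-endpoint>/v2/models/<model-name>/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "input0", "shape": [4], "datatype": "FP32", "data": [0.1, 0.2, 0.3, 0.4]}]}'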
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are created and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but calls to the gRPC endpoint return errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods, as in the sketch that follows.
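A sketch of the relevant fields in the ServiceMeshControlPlane CR; the limit values are illustrative, not recommendations:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi   # example value
    egress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi   # example value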
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart
If you use the Elyra JupyterLab extension to create and run pipelines within JupyterLab, and you configure the pipeline server after you have created a workbench and specified a workbench image, you cannot execute the pipeline, even after restarting the workbench.
- Workaround
- Stop the running workbench.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the workbench.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.
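On a single node, the change amounts to raising the sysctl, for example; the value shown is illustrative, and you should use the value recommended in the Knowledgebase solution:

# Inspect the current limit, then raise it (example value).
sysctl net.core.bpf_jit_limit
sysctl -w net.core.bpf_jit_limit=528482304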
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL of a Jupyter notebook, you can open it again in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata, as shown in the sketch that follows. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
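In the MachineSet, the label placement looks like this; the label value is illustrative:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-gpu   # example value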
RHODS-4799 - Tensorboard requires manual steps to view
If you use a TensorFlow or PyTorch workbench image and want to use TensorBoard to display data, you must perform manual steps to include environment variables in the workbench environment and to import those variables for use in your code.
- Workaround
When you start your basic workbench, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.
import os
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
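After setting the variable, TensorBoard can be launched from a notebook cell with the standard Jupyter magics; these are generic Jupyter commands, not specific to OpenShift AI:

%load_ext tensorboard
%tensorboard --logdir logs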
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact Red Hat Support through the Red Hat Customer Portal for assistance with manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.
Chapter 9. Product features
Red Hat OpenShift AI provides a rich set of features for data scientists and cluster administrators. To learn more, see Introduction to Red Hat OpenShift AI.