Chapter 5. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.
5.1. Deprecated
5.1.1. Deprecated Kubeflow Training Operator v1
The Kubeflow Training Operator (v1) is deprecated starting with OpenShift AI 2.25 and is planned for removal in a future release. This deprecation is part of the transition to Kubeflow Trainer v2, which delivers enhanced capabilities and improved functionality.
5.1.2. Deprecated TrustyAI service CRD v1alpha1
Starting with OpenShift AI 2.25, the v1alpha1 version of the TrustyAI service CRD is deprecated and planned for removal in an upcoming release. You must update the TrustyAI Operator to version v1 to receive future Operator updates.
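Before migrating, you can check which versions of the TrustyAI service CRD your cluster currently serves. The CRD name below, trustyaiservices.trustyai.opendatahub.io, is an assumption based on the operator's API group; verify it on your cluster first:

```shell
# Confirm the CRD name on your cluster (the name below is an assumption):
oc get crd | grep -i trustyai

# List the versions of the TrustyAI service CRD and whether each is served.
oc get crd trustyaiservices.trustyai.opendatahub.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\tserved="}{.served}{"\n"}{end}'
```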
5.1.3. Deprecated KServe Serverless deployment mode
Starting with OpenShift AI 2.25, the KServe Serverless deployment mode is deprecated. You can continue to deploy models by migrating to the KServe RawDeployment mode. If you are upgrading to Red Hat OpenShift AI 3.0, you must migrate all workloads that use the retired Serverless or ModelMesh modes before upgrading.
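As a minimal sketch of the migration path, KServe selects the deployment mode per InferenceService through the serving.kserve.io/deploymentMode annotation. The resource name and namespace below are illustrative, not from your cluster:

```shell
# Switch an existing InferenceService from Serverless to RawDeployment mode.
# "my-model" and "my-project" are illustrative; substitute your own values.
oc annotate inferenceservice my-model -n my-project \
  serving.kserve.io/deploymentMode=RawDeployment --overwrite

# After reconciliation, the model runs as a plain Kubernetes Deployment
# rather than a Knative service.
oc get deployments -n my-project
```

Depending on your setup, you may prefer to change the default deployment mode cluster-wide in the KServe configuration instead of annotating each InferenceService.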
5.1.4. Deprecated LAB-tuning
Starting with OpenShift AI 2.25, the LAB-tuning feature is deprecated. If you are using LAB-tuning for large language model customization, plan to migrate to alternative fine-tuning or model customization methods as they become available.
5.1.5. Deprecated embedded Kueue component
Starting with OpenShift AI 2.24, the embedded Kueue component for managing distributed workloads is deprecated. OpenShift AI now uses the Red Hat Build of Kueue Operator to provide enhanced workload scheduling across distributed training, workbench, and model serving workloads. The deprecated embedded Kueue component is not supported in any Extended Update Support (EUS) release. To ensure workloads continue using queue management, you must migrate from the embedded Kueue component to the Red Hat Build of Kueue Operator, which requires OpenShift Container Platform 4.18 or later. To migrate, complete the following steps:
- Install the Red Hat Build of Kueue Operator from OperatorHub.
- Edit your DataScienceCluster custom resource to set the spec.components.kueue.managementState field to Unmanaged.
- Verify that existing Kueue configurations (ClusterQueue and LocalQueue) are preserved after migration.
For detailed instructions, see Migrating to the Red Hat build of Kueue Operator.
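The migration steps above can be sketched from the command line. The instance name default-dsc is an assumption; check the actual name on your cluster first:

```shell
# Find the name of your DataScienceCluster instance (cluster-scoped resource).
oc get datasciencecluster

# Hand the Kueue component over to the Red Hat Build of Kueue Operator.
# "default-dsc" is an assumed instance name; substitute your own.
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec": {"components": {"kueue": {"managementState": "Unmanaged"}}}}'

# Verify that existing queue configurations are preserved after migration.
oc get clusterqueues
oc get localqueues --all-namespaces
```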
5.1.6. Deprecated CodeFlare Operator
Starting with OpenShift AI 2.24, the CodeFlare Operator is deprecated and will be removed in a future release of OpenShift AI.
This deprecation does not affect the Red Hat OpenShift AI API tiers.
5.1.7. Deprecated model registry API v1alpha1
Starting with OpenShift AI 2.24, the model registry API version v1alpha1 is deprecated and will be removed in a future release of OpenShift AI. The latest model registry API version is v1beta1.
5.1.8. Deprecated multi-model serving platform (ModelMesh)
Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.
For more information or for help on using the single-model serving platform, contact your account manager.
5.1.9. Deprecated Text Generation Inference Server (TGIS)
Starting with OpenShift AI version 2.19, the Text Generation Inference Server (TGIS) is deprecated. TGIS will continue to be supported through the OpenShift AI 2.16 EUS lifecycle. Caikit-TGIS and Caikit are not affected and will continue to be supported. The out-of-the-box serving runtime template will no longer be deployed. vLLM is recommended as a replacement runtime for TGIS.
5.1.10. Deprecated accelerator profiles
Accelerator profiles are now deprecated. To target specific worker nodes for workbenches or model serving workloads, use hardware profiles.
5.1.11. Deprecated OpenVINO Model Server (OVMS) CUDA plugin
The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.
5.1.12. OpenShift AI dashboard user management moved from OdhDashboardConfig to Auth resource
Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.
| Resource | 2.16 and earlier versions | 2.17 and later versions |
|---|---|---|
| Admin groups | spec.groupsConfig.adminGroups in OdhDashboardConfig | spec.adminGroups in Auth |
| User groups | spec.groupsConfig.allowedGroups in OdhDashboardConfig | spec.allowedGroups in Auth |
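The change described above can be applied from the command line. This is a sketch under assumptions: the Auth instance is typically a cluster-scoped resource named auth in the services.platform.opendatahub.io API group, and the group names below are placeholders, not defaults:

```shell
# Inspect the current dashboard access configuration.
# Use the fully qualified resource name to avoid clashing with
# OpenShift's own Auth type in config.openshift.io.
oc get auth.services.platform.opendatahub.io auth -o yaml

# Set the admin and allowed groups on the Auth resource.
# "rhoai-admins" and "rhoai-users" are assumed group names; use your own.
oc patch auth.services.platform.opendatahub.io auth --type merge \
  -p '{"spec": {"adminGroups": ["rhoai-admins"], "allowedGroups": ["rhoai-users"]}}'
```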
5.1.13. Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
| Deprecated parameter | Replaced by |
|---|---|
| head_cpus | head_cpu_requests, head_cpu_limits |
| head_memory | head_memory_requests, head_memory_limits |
| head_gpus | head_extended_resource_requests |
| min_cpus | worker_cpu_requests |
| max_cpus | worker_cpu_limits |
| min_memory | worker_memory_requests |
| max_memory | worker_memory_limits |
| num_gpus | worker_extended_resource_requests |
You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
5.2. Removed functionality
5.2.1. Microsoft SQL Server command-line tool removal
Starting with OpenShift AI 2.24, the Microsoft SQL Server command-line tools (sqlcmd, bcp) have been removed from workbenches. You can no longer manage Microsoft SQL Server using the preinstalled command-line client.
5.2.2. Model registry ML Metadata (MLMD) server removal
Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. This change simplifies the overall architecture and ensures the long-term maintainability and efficiency of the model registry by transitioning from the ml-metadata component to direct database access within the model registry itself.
If you see the following error for your model registry deployment, this means that your database schema migration has failed:
error: error connecting to datastore: Dirty database version {version}. Fix and force version.
You can fix this issue by manually resetting the dirty state in the database to 0 before traffic can be routed to the pod. Perform the following steps:
- Find the name of your model registry database pod:

  kubectl get pods -n <your-namespace> | grep model-registry-db

  Replace <your-namespace> with the namespace where your model registry is deployed.
- Use kubectl exec to run the query on the model registry database pod:

  kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"

  Replace <your-namespace> with your model registry namespace, <your-db-pod-name> with the pod name that you found in the previous step, and <your-db-name> with your model registry database name.

This resets the dirty state in the database, allowing the model registry to start correctly.
5.2.3. Embedded subscription channel not used in some versions
For OpenShift AI 2.8 to 2.20 and 2.22 to 2.25, the embedded subscription channel is not used. You cannot select the embedded channel for a new installation of the Operator for those versions. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
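To confirm which subscription channel an existing Operator installation uses, you can inspect the Subscription object. The subscription name rhods-operator and the namespace redhat-ods-operator are typical defaults, not guaranteed; adjust them for your installation:

```shell
# Locate the OpenShift AI Operator subscription (names vary by installation).
oc get subscriptions --all-namespaces | grep -i rhods

# Print the channel the subscription tracks.
oc get subscription rhods-operator -n redhat-ods-operator \
  -o jsonpath='{.spec.channel}{"\n"}'
```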
5.2.4. Anaconda removal
Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.
If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:
- Remove the secret that contains your Anaconda password:

  oc delete secret -n redhat-ods-applications anaconda-ce-access
- Remove the ConfigMap for the Anaconda validation cronjob:

  oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result
- Remove the Anaconda image stream:

  oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda
- Remove the Anaconda job that validated the downloading of images:

  oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run
- Remove any pods related to Anaconda cronjob runs:

  oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' | xargs -r oc delete pod -n redhat-ods-applications
5.2.5. Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs for Python scripts that run in Elyra pipelines are no longer stored in S3-compatible storage. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.
If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.
Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
5.2.6. Beta subscription channel no longer used
Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
5.2.7. HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.