Chapter 6. Support removals


This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.

6.1. Deprecated functionality

Deprecated annotation format for Connection Secrets

Starting with OpenShift AI 3.0, the opendatahub.io/connection-type-ref annotation format for creating Connection Secrets is deprecated.

For all new Connection Secrets, use the opendatahub.io/connection-type-protocol annotation instead. While both formats are currently supported, connection-type-protocol takes precedence and should be used for future compatibility.
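
For example, a Connection Secret that uses the new annotation might look like the following minimal sketch. The protocol value (s3), the Secret name, and the data keys are illustrative assumptions; use the values that match your connection type:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-connection  # hypothetical name
      annotations:
        opendatahub.io/connection-type-protocol: s3  # preferred format; "s3" is an example value
    type: Opaque
    stringData:
      AWS_ACCESS_KEY_ID: <access-key>  # example keys for an S3-style connection
      AWS_S3_ENDPOINT: <endpoint-url>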

6.1.1. Deprecated Kubeflow Training Operator v1

The Kubeflow Training Operator (v1) is deprecated starting with OpenShift AI 2.25 and is planned for removal in a future release. This deprecation is part of the transition to Kubeflow Trainer v2, which delivers enhanced capabilities and improved functionality.

6.1.2. Deprecated TrustyAI service CRD v1alpha1

Starting with OpenShift AI 2.25, the v1alpha1 version of the TrustyAI service CRD is deprecated and planned for removal in an upcoming release. You must update your TrustyAI service resources to the v1 version to receive future Operator updates.
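
For example, an updated TrustyAI service manifest would reference the v1 API version. This sketch assumes the trustyai.opendatahub.io API group and the TrustyAIService kind; verify the versions available on your cluster before updating:

    apiVersion: trustyai.opendatahub.io/v1  # previously trustyai.opendatahub.io/v1alpha1
    kind: TrustyAIService
    metadata:
      name: trustyai-service  # hypothetical name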

6.1.3. Deprecated KServe Serverless deployment mode

Starting with OpenShift AI 2.25, the KServe Serverless deployment mode is deprecated. You can continue to deploy models by migrating to the KServe RawDeployment mode. If you are upgrading to Red Hat OpenShift AI 3.0, you must migrate all workloads that use the retired Serverless or ModelMesh modes before you upgrade.
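
For example, migrating a deployment typically means setting the deployment mode on the InferenceService. The following is a minimal sketch, assuming the standard KServe serving.kserve.io/deploymentMode annotation; the model name, format, and storage location are illustrative:

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: my-model  # hypothetical name
      annotations:
        serving.kserve.io/deploymentMode: RawDeployment  # replaces the deprecated Serverless mode
    spec:
      predictor:
        model:
          modelFormat:
            name: onnx  # example model format
          storageUri: s3://my-bucket/my-model  # example model location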

6.1.4. Deprecated model registry API v1alpha1

Starting with OpenShift AI 2.24, the model registry API version v1alpha1 is deprecated and will be removed in a future release of OpenShift AI. The latest model registry API version is v1beta1.

6.1.5. Multi-model serving platform (ModelMesh)

Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.

For more information or for help on using the single-model serving platform, contact your account manager.

6.1.6. Accelerator Profiles and legacy Container Size selector deprecated

Starting with OpenShift AI 3.0, Accelerator Profiles and the Container Size selector for workbenches are deprecated. These features are replaced by the more flexible and unified Hardware Profiles capability.

6.1.7. Deprecated OpenVINO Model Server (OVMS) CUDA plugin

The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.

6.1.8. Deprecated groupsConfig option in OdhDashboardConfig

Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, update them to reference the Auth resource instead.

Table 6.1. Updated configurations

    Resource        2.16 and earlier                   2.17 and later versions
    apiVersion      opendatahub.io/v1alpha             services.platform.opendatahub.io/v1alpha1
    kind            OdhDashboardConfig                 Auth
    name            odh-dashboard-config               auth
    Admin groups    spec.groupsConfig.adminGroups      spec.adminGroups
    User groups     spec.groupsConfig.allowedGroups    spec.allowedGroups
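
Based on the mapping above, a minimal Auth resource might look like the following sketch. The group names are illustrative, and the exact field structure is an assumption derived from the table:

    apiVersion: services.platform.opendatahub.io/v1alpha1
    kind: Auth
    metadata:
      name: auth
    spec:
      adminGroups:
        - rhods-admins  # example admin group
      allowedGroups:
        - system:authenticated  # example user group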

6.1.9. Deprecated cluster configuration parameters

When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.

    Deprecated parameter    Replaced by
    head_cpus               head_cpu_requests, head_cpu_limits
    head_memory             head_memory_requests, head_memory_limits
    min_cpus                worker_cpu_requests
    max_cpus                worker_cpu_limits
    min_memory              worker_memory_requests
    max_memory              worker_memory_limits
    head_gpus               head_extended_resource_requests
    num_gpus                worker_extended_resource_requests

You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation.
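
For example, a Ray cluster configuration that uses the new parameters might look like the following Python sketch, assuming the codeflare_sdk Cluster and ClusterConfiguration classes; all resource values are illustrative:

    from codeflare_sdk import Cluster, ClusterConfiguration

    cluster = Cluster(ClusterConfiguration(
        name="raytest",            # hypothetical cluster name
        num_workers=2,
        head_cpu_requests=2,       # replaces head_cpus
        head_cpu_limits=2,
        head_memory_requests=8,    # replaces head_memory (GiB)
        head_memory_limits=8,
        worker_cpu_requests=1,     # replaces min_cpus
        worker_cpu_limits=1,       # replaces max_cpus
        worker_memory_requests=4,  # replaces min_memory (GiB)
        worker_memory_limits=4,    # replaces max_memory
        worker_extended_resource_requests={"nvidia.com/gpu": 1},  # replaces num_gpus
    ))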

6.2. Removed functionality

Caikit-NLP component removed

The caikit-nlp component has been formally deprecated and removed from OpenShift AI 3.0.

This runtime is no longer included or supported in OpenShift AI. Users should migrate any dependent workloads to supported model serving runtimes.

TGIS component removed

The TGIS component, which was deprecated in OpenShift AI 2.19, has been removed in OpenShift AI 3.0.

TGIS continued to be supported through the OpenShift AI 2.16 Extended Update Support (EUS) lifecycle, which ended in June 2025.

Starting with this release, TGIS is no longer available or supported. Users should migrate their model serving workloads to supported runtimes such as Caikit or Caikit-TGIS.

AppWrapper Controller removed

The AppWrapper controller has been removed from OpenShift AI as part of the broader CodeFlare Operator removal process.

This change eliminates redundant functionality and reduces maintenance overhead and architectural complexity.

6.2.1. CodeFlare Operator removed

Starting with OpenShift AI 3.0, the CodeFlare Operator has been removed. The functionality previously provided by the CodeFlare Operator is now included in the KubeRay Operator, which provides equivalent capabilities such as mTLS, network isolation, and authentication.

LAB-tuning feature removed

Starting with OpenShift AI 3.0, the LAB-tuning feature has been removed.

Users who previously relied on LAB-tuning for large language model customization should migrate to alternative fine-tuning or model customization methods.

Embedded Kueue component removed

The embedded Kueue component, which was deprecated in OpenShift AI 2.24, has been removed in OpenShift AI 3.0.

OpenShift AI now uses the Red Hat Build of the Kueue Operator to provide enhanced workload scheduling across distributed training, workbench, and model serving workloads.

The embedded Kueue component is not supported in any Extended Update Support (EUS) release.

Removal of DataSciencePipelinesApplication v1alpha1 API version

The v1alpha1 API version of the DataSciencePipelinesApplication custom resource (datasciencepipelinesapplications.opendatahub.io/v1alpha1) has been removed.

OpenShift AI now uses the stable v1 API version (datasciencepipelinesapplications.opendatahub.io/v1).

You must update any existing manifests or automation to reference the v1 API version to ensure compatibility with OpenShift AI 3.0 and later.
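
For example, the beginning of an updated DataSciencePipelinesApplication manifest (the resource name is illustrative):

    apiVersion: datasciencepipelinesapplications.opendatahub.io/v1  # previously v1alpha1
    kind: DataSciencePipelinesApplication
    metadata:
      name: dspa  # hypothetical name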

6.2.2. Microsoft SQL Server command-line tool removal

Starting with OpenShift AI 2.24, the Microsoft SQL Server command-line tools (sqlcmd, bcp) have been removed from workbenches. You can no longer manage Microsoft SQL Server using the preinstalled command-line client.

6.2.3. Model registry ML Metadata (MLMD) server removal

Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. Replacing the ml-metadata component with direct database access simplifies the overall architecture and improves the long-term maintainability and efficiency of the model registry.

If you see the following error for your model registry deployment, this means that your database schema migration has failed:

error: error connecting to datastore: Dirty database version {version}. Fix and force version.

You can fix this issue by manually changing the database from a dirty state to 0 before traffic can be routed to the pod. Perform the following steps:

  1. Find the name of your model registry database pod as follows:

    kubectl get pods -n <your-namespace> | grep model-registry-db

    Replace <your-namespace> with the namespace where your model registry is deployed.

  2. Use kubectl exec to run the query on the model registry database pod as follows:

    kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- /bin/sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"'

    Replace <your-namespace> with your model registry namespace, <your-db-pod-name> with the pod name that you found in the previous step, and <your-db-name> with your model registry database name. Wrapping the query in /bin/sh -c ensures that the MYSQL_ROOT_PASSWORD environment variable is expanded inside the container rather than in your local shell.

    This resets the dirty state in the database, allowing the model registry to start correctly.
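
To confirm that the dirty flag was cleared, you can run a similar query against the same table. This check is a sketch that assumes the version and dirty column names implied by the error message and the update statement above:

    kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- /bin/sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; SELECT version, dirty FROM schema_migrations;"'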

6.2.4. Embedded subscription channel not used in some versions

For OpenShift AI 2.8 to 2.20 and 2.22 to 3.0, the embedded subscription channel is not used. You cannot select the embedded channel for a new installation of the Operator for those versions. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.

6.2.5. Anaconda removal

Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.

If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:

  1. Remove the secret that contains your Anaconda password:

    oc delete secret -n redhat-ods-applications anaconda-ce-access

  2. Remove the ConfigMap for the Anaconda validation cronjob:

    oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result

  3. Remove the Anaconda image stream:

    oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda

  4. Remove the Anaconda job that validated the downloading of images:

    oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run

  5. Remove any pods related to Anaconda cronjob runs:

    oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' | xargs -r oc delete pod -n redhat-ods-applications

6.2.6. Pipeline logs for Python scripts no longer stored in S3

Logs are no longer stored in S3-compatible storage for Python scripts that run in Elyra pipelines. Starting with OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.

Note

For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.

If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.

Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.

6.2.7. Beta subscription channel no longer used

Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.

6.2.8. HabanaAI workbench image removal

Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
