Chapter 5. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see the Red Hat OpenShift AI: Supported Configurations Knowledgebase article.
5.1. Deprecated
5.1.1. Upcoming deprecation of the embedded Kueue component
The embedded Kueue component that is managed as part of OpenShift AI will be deprecated beginning with OpenShift AI 2.24. After deprecation, Kueue will be provided by Red Hat OpenShift. The embedded Kueue component will not be supported in any Extended Update Support (EUS) release.
You can configure Kueue with one of the following management states in your DataScienceCluster custom resource:
- Managed: The Operator manages the embedded Kueue component (default).
- Unmanaged: The Operator integrates with an externally installed Red Hat build of Kueue Operator.
- Removed: The Operator does not install Kueue, or it removes it if already installed.
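For example, the management state is set on the spec.components.kueue.managementState field of the DataScienceCluster custom resource. The sketch below shows only the Kueue stanza; the apiVersion and metadata values are illustrative:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc   # illustrative name
spec:
  components:
    kueue:
      managementState: Managed   # or Unmanaged / Removed
```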
To prepare for this deprecation, you can migrate from the embedded Kueue component to the external Red Hat build of Kueue Operator. This configuration is supported in OpenShift AI 2.23 as a Technology Preview feature and requires OpenShift Container Platform 4.18 or later and the Red Hat build of Kueue 1.0.
To migrate, complete the following steps:
- Install the Red Hat build of Kueue Operator from OperatorHub.
- Edit your DataScienceCluster custom resource to set the spec.components.kueue.managementState field to Unmanaged.
- Verify that existing Kueue configurations (ClusterQueues and LocalQueues) are preserved after the migration.
For detailed instructions, see Migrating to the Red Hat build of Kueue Operator.
This deprecation does not affect the Red Hat OpenShift AI API tiers.
5.1.2. Multi-model serving platform (ModelMesh)
Starting with OpenShift AI version 2.19, the multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.
For more information or for help on using the single-model serving platform, contact your account manager.
5.1.3. Deprecated Text Generation Inference Server (TGIS)
Starting with OpenShift AI version 2.19, the Text Generation Inference Server (TGIS) is deprecated. TGIS will continue to be supported through the OpenShift AI 2.16 EUS lifecycle. Caikit-TGIS and Caikit are not affected and will continue to be supported. The out-of-the-box serving runtime template will no longer be deployed. vLLM is recommended as a replacement runtime for TGIS.
5.1.4. Deprecated accelerator profiles
Accelerator profiles are now deprecated. To target specific worker nodes for workbenches or model serving workloads, use hardware profiles.
5.1.5. Deprecated OpenVINO Model Server (OVMS) plugin
The CUDA plugin for the OpenVINO Model Server (OVMS) is now deprecated and will no longer be available in future releases of OpenShift AI.
5.1.6. OpenShift AI dashboard user management moved from OdhDashboardConfig to Auth resource
Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.
Resource | 2.16 and earlier | 2.17 and later versions |
---|---|---|
kind | OdhDashboardConfig | Auth |
Admin groups | spec.groupsConfig.adminGroups | spec.adminGroups |
User groups | spec.groupsConfig.allowedGroups | spec.allowedGroups |
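Based on the field move described above, an Auth resource might look like the following sketch; the apiVersion, resource name, and group names shown here are illustrative assumptions:

```yaml
apiVersion: services.platform.opendatahub.io/v1alpha1
kind: Auth
metadata:
  name: auth
spec:
  adminGroups:             # replaces spec.groupsConfig.adminGroups
    - rhods-admins
  allowedGroups:           # replaces spec.groupsConfig.allowedGroups
    - system:authenticated
```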
5.1.7. Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
Deprecated parameter | Replaced by |
---|---|
head_cpus | head_cpu_requests, head_cpu_limits |
head_memory | head_memory_requests, head_memory_limits |
min_cpus | worker_cpu_requests |
max_cpus | worker_cpu_limits |
min_memory | worker_memory_requests |
max_memory | worker_memory_limits |
head_gpus | head_extended_resource_requests |
num_gpus | worker_extended_resource_requests |
You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
5.2. Removed functionality
5.2.1. Model registry ML Metadata (MLMD) server removal
Starting with OpenShift AI 2.23, the ML Metadata (MLMD) server has been removed from the model registry component. The model registry now interacts directly with the underlying database by using the existing model registry API and database schema. This change simplifies the overall architecture and ensures the long-term maintainability and efficiency of the model registry by transitioning from the ml-metadata component to direct database access within the model registry itself.
If you see the following error for your model registry deployment, this means that your database schema migration has failed:
error: error connecting to datastore: Dirty database version {version}. Fix and force version.
You can fix this issue by manually resetting the dirty flag in the database to 0 before traffic can be routed to the pod. Perform the following steps:
Find the name of your model registry database pod as follows:
kubectl get pods -n <your-namespace> | grep model-registry-db
Replace <your-namespace> with the namespace where your model registry is deployed.
Use kubectl exec to run the query on the model registry database pod as follows:
kubectl exec -n <your-namespace> <your-db-pod-name> -c mysql -- mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "USE <your-db-name>; UPDATE schema_migrations SET dirty = 0;"
Replace <your-namespace> with your model registry namespace, <your-db-pod-name> with the pod name that you found in the previous step, and <your-db-name> with your model registry database name.
This resets the dirty state in the database, allowing the model registry to start correctly.
5.2.2. Embedded subscription channel not used in some versions
For OpenShift AI 2.8 to 2.20 and 2.22 to 2.23, the embedded subscription channel is not used. You cannot select the embedded channel for a new installation of the Operator for those versions. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
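Subscription channels are selected through the Operator Lifecycle Manager Subscription resource. The following is a minimal sketch; the channel, name, and namespace values are illustrative, so choose a channel that is supported for your version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  channel: stable   # the embedded channel cannot be selected for the versions above
  name: rhods-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```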
5.2.3. Anaconda removal
Anaconda is an open source distribution of the Python and R programming languages. Starting with OpenShift AI version 2.18, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.
If you previously installed Anaconda from OpenShift AI, a cluster administrator must complete the following steps from the OpenShift command-line interface to remove the Anaconda-related artifacts:
Remove the secret that contains your Anaconda password:
oc delete secret -n redhat-ods-applications anaconda-ce-access
Remove the ConfigMap for the Anaconda validation cronjob:
oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result
Remove the Anaconda image stream:
oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda
Remove the Anaconda job that validated the downloading of images:
oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run
Remove any pods related to Anaconda cronjob runs:
oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' | xargs -r oc delete pod -n redhat-ods-applications
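The awk filter in this step can be exercised locally against sample `oc get pods --no-headers=true` output; the pod names below are illustrative:

```shell
# Keep only the Anaconda validator pods and print the pod name (first column).
# The sample lines stand in for `oc get pods --no-headers=true` output.
printf '%s\n' \
  'anaconda-ce-periodic-validator-job-custom-run-abc12   0/1   Completed   0   2d' \
  'notebook-controller-deployment-5b7f9c-xyz             1/1   Running     0   9d' \
  | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}'
# → anaconda-ce-periodic-validator-job-custom-run-abc12
```

On a live cluster, the matching pod names would then be passed to oc delete pod.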
5.2.4. Data science pipelines v1 support removed
Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Starting with OpenShift AI 2.9, data science pipelines are based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.
Starting with OpenShift AI 2.16, data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.
OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI 2.16 or later, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Migrating to data science pipelines 2.0.
Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade to OpenShift AI 2.16 or later with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.
5.2.5. Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs for Python scripts running in Elyra pipelines are no longer stored in S3-compatible storage. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
For this change to take effect, you must use the Elyra runtime images provided in workbench images at version 2024.1 or later.
If you have an older workbench image version, update the Version selection field to a compatible workbench image version, for example, 2024.1, as described in Updating a project workbench.
Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
5.2.6. Beta subscription channel no longer used
Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
5.2.7. HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.