Chapter 6. Support removals
This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see Supported configurations.
6.1. Removed functionality
6.1.1. Functionality removed in Red Hat OpenShift AI 2.16.2 (March 2025)
- Anaconda removal
Anaconda is an open source distribution of the Python and R programming languages. In Red Hat OpenShift AI 2.16.2, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.
If you previously installed Anaconda from OpenShift AI, complete the following steps to remove the Anaconda-related artifacts:
1. Remove the secret that contains your Anaconda password:
oc delete secret -n redhat-ods-applications anaconda-ce-access
2. Remove the ConfigMap for the Anaconda validation cronjob:
oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result
3. Remove the Anaconda image stream:
oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda
4. Remove the Anaconda job that validated the downloading of images:
oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run
5. Remove any pods related to Anaconda cronjob runs:
oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' | xargs oc delete pod -n redhat-ods-applications
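The steps above can also be combined into a single cleanup script. This is a sketch, not part of the official procedure: it assumes the default redhat-ods-applications namespace, and the --ignore-not-found flag lets each delete succeed even if an artifact was already removed.

```shell
#!/usr/bin/env bash
# Sketch: remove all Anaconda-related artifacts in one pass.
# Assumes the default redhat-ods-applications namespace (an assumption;
# adjust NS if your installation uses a different namespace).
NS=redhat-ods-applications

cleanup_anaconda() {
  oc delete secret -n "$NS" anaconda-ce-access --ignore-not-found
  oc delete configmap -n "$NS" anaconda-ce-validation-result --ignore-not-found
  oc delete imagestream -n "$NS" s2i-minimal-notebook-anaconda --ignore-not-found
  oc delete job -n "$NS" anaconda-ce-periodic-validator-job-custom-run --ignore-not-found
  # Delete any leftover validator pods by name prefix
  oc get pods -n "$NS" --no-headers=true \
    | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' \
    | xargs --no-run-if-empty oc delete pod -n "$NS"
}
```

Source the script and run cleanup_anaconda while logged in to the cluster as a user with permission to delete resources in the namespace.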
6.1.2. Functionality removed in Red Hat OpenShift AI 2.16.1 (January 2025)
No functionality removed in this release.
6.1.3. Functionality removed in Red Hat OpenShift AI 2.16.0 (December 2024)
- Data science pipelines v1 support removed
Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Starting with OpenShift AI 2.9, data science pipelines are based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.
Starting with OpenShift AI 2.16, data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.
OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI 2.16, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Upgrading to data science pipelines 2.0.
Important: Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade to OpenShift AI 2.16 with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.
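Before installing or upgrading, one way to check for an existing Argo Workflows installation is to look for its custom resource definitions, which Argo Workflows registers in the argoproj.io API group. This is a sketch, not an official verification procedure:

```shell
#!/usr/bin/env bash
# Sketch: detect an existing Argo Workflows installation by checking for
# CRDs in the argoproj.io API group before installing or upgrading to
# OpenShift AI 2.16 with data science pipelines 2.0.
check_argo_workflows() {
  if oc get crds -o name | grep -q 'argoproj.io'; then
    echo "Argo Workflows CRDs found: remove the existing installation first"
    return 1
  fi
  echo "No Argo Workflows CRDs found"
}
```

Note that after OpenShift AI 2.16 with data science pipelines 2.0 is installed, these CRDs will be present as part of that installation, so this check is only meaningful before you install or upgrade.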
6.1.4. Functionality removed in earlier releases
- HabanaAI workbench image removal
- Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
- Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3
Logs for Python scripts running in Elyra pipelines are no longer stored in S3-compatible storage. From OpenShift AI version 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.
Note: For this change to take effect, you must use the Elyra runtime images provided in the 2024.1 or 2024.2 workbench images.
If you have an older workbench image version, update the Version selection field to 2024.1 or 2024.2, as described in Updating a project workbench. Updating your workbench image version clears any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.
- Embedded subscription channel no longer used
- Starting with OpenShift AI 2.8, the embedded subscription channel is no longer used. You can no longer select the embedded channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
- Version 1.2 notebook container images for workbenches are no longer supported
- When you create a workbench, you specify a notebook container image to use with the workbench. Starting with OpenShift AI 2.5, when you create a new workbench, version 1.2 notebook container images are not available to select. Workbenches that are already running with a version 1.2 notebook image continue to work normally. However, Red Hat recommends that you update your workbench to use the latest notebook container image.
- Beta subscription channel no longer used
- Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
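To confirm which subscription channel your Operator installation currently uses, you can query its Subscription object. This is a sketch: the subscription name (rhods-operator) and namespace (redhat-ods-operator) are assumptions based on a default installation and may differ in your cluster.

```shell
#!/usr/bin/env bash
# Sketch: print the subscription channel of the OpenShift AI Operator.
# The subscription name and namespace below are assumed defaults; adjust
# them to match your installation.
current_channel() {
  oc get subscription rhods-operator -n redhat-ods-operator \
    -o jsonpath='{.spec.channel}'
}
```

If the output is embedded or beta, switch the subscription to a supported channel (for example, stable) before upgrading.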
6.2. Deprecated functionality
- Deprecated cluster configuration parameters
When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.
Deprecated parameter    Replaced by
head_cpus               head_cpu_requests, head_cpu_limits
head_memory             head_memory_requests, head_memory_limits
min_cpus                worker_cpu_requests
max_cpus                worker_cpu_limits
min_memory              worker_memory_requests
max_memory              worker_memory_limits
head_gpus               head_extended_resource_requests
num_gpus                worker_extended_resource_requests

You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).
6.3. Planned support removal
- Admin groups restructure
- Currently, cluster administrators can configure OpenShift AI administration access to the dashboard by using the groupsConfig option in the OpenShift OdhDashboardConfig custom resource (CR). In an upcoming release, OpenShift AI administration functionality will move from the OdhDashboardConfig CR to a new CR on the OpenShift cluster. This advance notice is to let you know that workflows (for example, GitOps workflows) that interact with the OdhDashboardConfig CR will likely be affected by this change. Details of this change will be provided when available.