Chapter 6. Support removals


This section describes major changes in support for user-facing features in Red Hat OpenShift AI. For information about OpenShift AI supported software platforms, components, and dependencies, see Supported configurations.

6.1. Removed functionality

6.1.1. Functionality removed in OpenShift AI 2.16.2

Anaconda removal

Anaconda is an open source distribution of the Python and R programming languages. In Red Hat OpenShift AI 2.16.2, Anaconda is no longer included in OpenShift AI, and Anaconda resources are no longer supported or managed by OpenShift AI.

If you previously installed Anaconda from OpenShift AI, complete the following steps to remove the Anaconda-related artifacts:

  1. Remove the secret that contains your Anaconda password:

    oc delete secret -n redhat-ods-applications anaconda-ce-access
  2. Remove the ConfigMap for the Anaconda validation cronjob:

    oc delete configmap -n redhat-ods-applications anaconda-ce-validation-result
  3. Remove the Anaconda image stream:

    oc delete imagestream -n redhat-ods-applications s2i-minimal-notebook-anaconda
  4. Remove the Anaconda job that validated the downloading of images:

    oc delete job -n redhat-ods-applications anaconda-ce-periodic-validator-job-custom-run
  5. Remove any pods related to Anaconda cronjob runs. The following command lists the matching pods; a sketch that deletes them appears after this procedure:

    oc get pods -n redhat-ods-applications --no-headers=true | awk '/anaconda-ce-periodic-validator-job-custom-run/'
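
If the previous command returns any pods, you can remove them by piping the pod names to oc delete. The following is a minimal sketch, assuming the pods follow the anaconda-ce-periodic-validator-job-custom-run naming pattern shown above; review the list of pods before deleting them:

    # Delete any Anaconda validator pods returned by the previous command
    oc get pods -n redhat-ods-applications --no-headers=true \
      | awk '/anaconda-ce-periodic-validator-job-custom-run/ {print $1}' \
      | xargs --no-run-if-empty oc delete pod -n redhat-ods-applications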

6.1.2. Functionality removed in OpenShift AI 2.16.1

No functionality removed in this release.

6.1.3. Functionality removed in OpenShift AI 2.16

Data science pipelines v1 support removed

Previously, data science pipelines in OpenShift AI were based on Kubeflow Pipelines v1. Starting with OpenShift AI 2.9, data science pipelines are based on Kubeflow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.

Starting with OpenShift AI 2.16, data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.

OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. If you are upgrading to OpenShift AI 2.16, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Upgrading to data science pipelines 2.0.
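
Before you migrate, it can help to inventory the pipeline server instances that currently exist on the cluster. The following is a minimal sketch, assuming your pipeline servers are represented by DataSciencePipelinesApplication custom resources and that the spec includes a dspVersion field; verify both against your cluster before acting on the output:

    # List all pipeline server (DSPA) instances and the pipelines version each one uses
    oc get datasciencepipelinesapplications --all-namespaces \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,DSP_VERSION:.spec.dspVersion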

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade to OpenShift AI 2.16 with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster.
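
One way to check for a pre-existing Argo Workflows installation is to look for its CustomResourceDefinitions before you install or upgrade. This is a minimal sketch; an empty result suggests that no separate Argo Workflows installation is present, but confirm with your cluster administrator if Argo Workflows might have been installed by other means:

    # Look for Argo Workflows CRDs (for example, workflows.argoproj.io)
    oc get crd -o name | grep argoproj.io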

6.1.4. Functionality removed in earlier releases

HabanaAI workbench image removal
Support for the HabanaAI 1.10 workbench image has been removed. New installations of OpenShift AI from version 2.14 do not include the HabanaAI workbench image. However, if you upgrade OpenShift AI from a previous version, the HabanaAI workbench image remains available, and existing HabanaAI workbench images continue to function.
Pipeline logs for Python scripts running in Elyra pipelines are no longer stored in S3

Logs for Python scripts that run in Elyra pipelines are no longer stored in S3-compatible storage. Starting with OpenShift AI 2.11, you can view these logs in the pipeline log viewer in the OpenShift AI dashboard.

Note

For this change to take effect, you must use the Elyra runtime images provided in the 2024.1 or 2024.2 workbench images.

If you have an older workbench image version, update the Version selection field to 2024.1 or 2024.2, as described in Updating a project workbench.

Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you have updated your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.

Embedded subscription channel no longer used
Starting with OpenShift AI 2.8, the embedded subscription channel is no longer used. You can no longer select the embedded channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
Version 1.2 notebook container images for workbenches are no longer supported
When you create a workbench, you specify a notebook container image to use with the workbench. Starting with OpenShift AI 2.5, when you create a new workbench, version 1.2 notebook container images are not available to select. Workbenches that are already running with a version 1.2 notebook image continue to work normally. However, Red Hat recommends that you update your workbench to use the latest notebook container image.
Beta subscription channel no longer used
Starting with OpenShift AI 2.5, the beta subscription channel is no longer used. You can no longer select the beta channel for a new installation of the Operator. For more information about subscription channels, see Installing the Red Hat OpenShift AI Operator.
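
If you are not sure which subscription channel an existing installation of the Operator uses, you can check its Subscription resource. This is a minimal sketch, assuming the Red Hat OpenShift AI Operator is installed in the redhat-ods-operator namespace; adjust the namespace to match your installation:

    # Show the subscription channel used by the installed Operator
    oc get subscriptions.operators.coreos.com -n redhat-ods-operator \
      -o custom-columns=NAME:.metadata.name,CHANNEL:.spec.channel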

6.2. Deprecated functionality

Deprecated cluster configuration parameters

When using the CodeFlare SDK to run distributed workloads in Red Hat OpenShift AI, the following parameters in the Ray cluster configuration are now deprecated and should be replaced with the new parameters as indicated.

Deprecated parameter     Replaced by
head_cpus                head_cpu_requests, head_cpu_limits
head_memory              head_memory_requests, head_memory_limits
min_cpus                 worker_cpu_requests
max_cpus                 worker_cpu_limits
min_memory               worker_memory_requests
max_memory               worker_memory_limits
head_gpus                head_extended_resource_requests
num_gpus                 worker_extended_resource_requests

You can also use the new extended_resource_mapping and overwrite_default_resource_mapping parameters, as appropriate. For more information about these new parameters, see the CodeFlare SDK documentation (external).

6.3. Planned support removal

Admin groups restructure
Currently, cluster administrators can configure OpenShift AI administration access to the dashboard by using the groupsConfig option in the OpenShift OdhDashboardConfig custom resource (CR). In an upcoming release, OpenShift AI administration functionality will move from the OdhDashboardConfig CR to a new CR on the OpenShift cluster. This notice is provided in advance because workflows that interact with the OdhDashboardConfig CR (for example, GitOps workflows) will likely be affected by this change. Details of the change will be provided when they are available.
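
To see which groups your cluster currently grants dashboard administration access, you can inspect the groupsConfig section of the OdhDashboardConfig CR. This is a minimal sketch, assuming the default resource name odh-dashboard-config in the redhat-ods-applications namespace; adjust both to match your installation:

    # Print the current groupsConfig from the dashboard configuration CR
    oc get odhdashboardconfig odh-dashboard-config -n redhat-ods-applications \
      -o jsonpath='{.spec.groupsConfig}{"\n"}'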