Chapter 3. Requirements for upgrading OpenShift AI

This section describes the tasks that you should complete when upgrading OpenShift AI.

Check the components in the DataScienceCluster object

When you upgrade Red Hat OpenShift AI, the upgrade process automatically uses the values from the previous DataScienceCluster object.

After the upgrade, you should inspect the DataScienceCluster object and optionally update the status of any components as described in Updating the installation status of Red Hat OpenShift AI components by using the web console.
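
You can also review the component states from the command line before making any changes in the web console. The following is a minimal sketch; the object name default-dsc is an assumption (a common default) and might differ in your cluster:

  # DataScienceCluster is cluster-scoped; list the objects to find the name
  $ oc get datasciencecluster

  # Show the full spec, including each component's managementState
  # ("default-dsc" is an assumed name; substitute the name listed above)
  $ oc get datasciencecluster default-dsc -o yaml

In the output, each entry under spec.components reports a managementState of Managed or Removed, indicating whether that component is enabled.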

Recreate existing pipeline runs

When you upgrade to a newer version, any existing pipeline runs that you created in the previous version continue to refer to the image for the previous version. This is expected behavior.

You must delete these pipeline runs (not the pipelines themselves) and create new pipeline runs; runs that you create in the newer version correctly refer to the image for the newer version.
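
If your data science pipelines 1.0 runs are backed by Tekton PipelineRun resources, you can list and delete them from the command line as well as from the dashboard. This is a sketch under that assumption; my-data-science-project is a placeholder for your project name:

  # List the existing pipeline runs in your data science project
  $ oc get pipelineruns -n my-data-science-project

  # Delete a specific run; the pipeline definition itself is not affected
  $ oc delete pipelinerun <run-name> -n my-data-science-project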

For more information on pipeline runs, see Managing pipeline runs.

Upgrading to data science pipelines 2.0

Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from the dashboard in OpenShift AI 2.11. If you already use data science pipelines, Red Hat recommends that you stay on OpenShift AI 2.8 until full feature parity in data science pipelines 2.0 has been delivered in a stable OpenShift AI release and you are ready to migrate to the new pipeline solution.

Data science pipelines 2.0 includes an installation of Argo Workflows. OpenShift AI does not support direct customer use of this embedded instance of Argo Workflows. Before you install or upgrade to OpenShift AI 2.9 or later with data science pipelines 2.0, ensure that no separate installation of Argo Workflows exists on your cluster.
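
One way to verify this before you upgrade is to check for the Argo Workflows custom resource definitions and any Argo-related Operator subscriptions. This sketch assumes the standard upstream argoproj.io CRD group names:

  # Any argoproj.io CRDs suggest an existing Argo Workflows installation
  $ oc get crds | grep -i argoproj.io

  # Check for an Argo-related Operator subscription in any namespace
  $ oc get subscriptions.operators.coreos.com -A | grep -i argo

If either command returns results before OpenShift AI has installed data science pipelines 2.0, remove that Argo Workflows installation before you upgrade.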

If you want to use existing pipelines and workbenches with data science pipelines 2.0 after upgrading to OpenShift AI 2.11, you must update your workbenches to use the 2024.1 notebook image version and then manually migrate your pipelines from data science pipelines 1.0 to 2.0. For more information, see Upgrading to data science pipelines 2.0.

Address KServe requirements

For the KServe component, which is used by the single-model serving platform to serve large models, you must meet the following requirements:

  • To fully install and use KServe, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see Serving large models.
  • If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For information, see Adding an authorization provider for the single-model serving platform.
  • If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed in the DataScienceCluster object), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies and the command sketch after this list.
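
If you disable KServe, the following commands show one way to set both management states from the command line. This is a sketch only: the object names default-dsc and default-dsci are assumptions (common defaults), and the serviceMesh setting lives in the DSCInitialization object rather than in the DataScienceCluster:

  # Set the KServe component to Removed ("default-dsc" is an assumed name)
  $ oc patch datasciencecluster default-dsc --type merge \
      -p '{"spec":{"components":{"kserve":{"managementState":"Removed"}}}}'

  # Disable the dependent Service Mesh component in the DSCInitialization
  # object ("default-dsci" is an assumed name)
  $ oc patch dscinitialization default-dsci --type merge \
      -p '{"spec":{"serviceMesh":{"managementState":"Removed"}}}'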