Chapter 3. Requirements for upgrading OpenShift AI
This section describes the tasks that you should complete when upgrading OpenShift AI.
Check the components in the DataScienceCluster object
When you upgrade Red Hat OpenShift AI, the upgrade process automatically uses the values from the previous DataScienceCluster object.
After the upgrade, you should inspect the DataScienceCluster object and optionally update the status of any components as described in Updating the installation status of Red Hat OpenShift AI components by using the web console.
New components are not automatically added to the DataScienceCluster object during upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
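For example, you can inspect and edit the component entries from the command line instead of the web console. The following commands are a minimal sketch: the object name default-dsc and the component name kueue are assumptions, so substitute the names that exist in your cluster.

oc get datasciencecluster                          # list DataScienceCluster objects in the cluster
oc get datasciencecluster default-dsc -o yaml      # inspect the component entries (assumed name: default-dsc)

# Add or enable a component by setting its managementState to Managed (assumed component: kueue)
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec": {"components": {"kueue": {"managementState": "Managed"}}}}'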
Upgrading to data science pipelines 2.0
Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Data science pipelines are now based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI.
Data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server.
OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. Before upgrading OpenShift AI, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Migrating to data science pipelines 2.0.
Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct usage of this installation of Argo Workflows.
If you upgrade to OpenShift AI with data science pipelines 2.0 and your cluster contains an Argo Workflows installation that was not installed by data science pipelines, the OpenShift AI components are not upgraded. To complete the component upgrade, either disable data science pipelines or remove the separate installation of Argo Workflows from your cluster. The component upgrade then completes automatically.
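Before you upgrade, you can check from the command line whether a separate Argo Workflows installation exists, and disable data science pipelines if you cannot remove it. This is a minimal sketch: the DataScienceCluster name default-dsc is an assumption, and you should inspect the CRD output yourself to confirm which operator owns it.

# Check whether the Argo Workflows CRD is present on the cluster
oc get crd workflows.argoproj.io

# Inspect the CRD metadata to confirm whether data science pipelines or a separate installation manages it
oc get crd workflows.argoproj.io -o yaml

# Alternatively, disable data science pipelines by setting its managementState to Removed (assumed name: default-dsc)
oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec": {"components": {"datasciencepipelines": {"managementState": "Removed"}}}}'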
Address KServe requirements
For the KServe component, which is used by the single-model serving platform to serve large models, you must meet the following requirements:
- To fully install and use KServe, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see Serving large models.
- If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For more information, see Adding an authorization provider for the single-model serving platform.
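To confirm that these Operators are present before you upgrade, you can list the installed Operator subscriptions from the command line. This is a minimal sketch: exact subscription and ClusterServiceVersion names vary by version, so treat the grep pattern as a starting point, and the DataScienceCluster name default-dsc is an assumption.

# Look for OpenShift Serverless, OpenShift Service Mesh, and Authorino Operator subscriptions
oc get subscriptions --all-namespaces | grep -Ei 'serverless|servicemesh|authorino'

# Confirm that the KServe component is enabled in the DataScienceCluster object (assumed name: default-dsc)
oc get datasciencecluster default-dsc -o jsonpath='{.spec.components.kserve.managementState}'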