Chapter 6. Upgrading from version 1 of OpenShift Data Science
As a cluster administrator, you can configure either automatic or manual upgrades from version 1 to version 2 of the Red Hat OpenShift Data Science Operator.
6.1. Overview of upgrading OpenShift Data Science self-managed
As a cluster administrator, you can configure either automatic or manual upgrades for the Red Hat OpenShift Data Science Operator.
- If you configure automatic upgrades, when a new version of the Red Hat OpenShift Data Science Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
- If you configure manual upgrades, when a new version of the Red Hat OpenShift Data Science Operator is available, OLM creates an update request. A cluster administrator must manually approve the update request to update the Operator to the new version. See Manually approving a pending Operator upgrade for more information about approving a pending Operator upgrade.
- By default, the Red Hat OpenShift Data Science Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you plan to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
- When you upgrade OpenShift Data Science, the upgrade process reloads any models that you previously deployed using ModelMesh, which is the default component for model-serving. The models are not available until this reload is complete.
- Red Hat supports the current release version and three previous release versions of OpenShift Data Science self-managed. For more information, see the Red Hat OpenShift Data Science self-managed Life Cycle Knowledgebase article.
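If you use the manual approval strategy, you can also approve a pending update from the CLI by approving its InstallPlan. The following is a sketch, not the documented procedure: the Operator namespace (redhat-ods-operator) and the InstallPlan name (install-xxxxx) are placeholder assumptions that you must verify on your own cluster. The commands default to a dry run that only prints what would be executed; set OC=oc to run them for real.

```shell
# Dry-run by default; set OC=oc to execute against a real cluster.
OC="${OC:-echo oc}"

# List InstallPlans in the Operator's namespace (assumed: redhat-ods-operator)
# and look for one with APPROVED set to false.
$OC get installplan -n redhat-ods-operator

# Approve a pending InstallPlan by name (install-xxxxx is a placeholder).
$OC patch installplan install-xxxxx -n redhat-ods-operator \
  --type merge -p '{"spec":{"approved":true}}'
```

Approving the InstallPlan has the same effect as clicking Approve in the web console: OLM then proceeds with that single pending update.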
6.2. Configuring the upgrade strategy for OpenShift Data Science
As a cluster administrator, you can configure either an automatic or manual upgrade strategy for the Red Hat OpenShift Data Science Operator.
- By default, the Red Hat OpenShift Data Science Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you intend to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
- When you upgrade OpenShift Data Science, the upgrade process reloads any models that you previously deployed using ModelMesh, which is the default component for model-serving. The models are not available until this reload is complete.
Prerequisites
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
- The Red Hat OpenShift Data Science Operator is installed.
Procedure
- Log in to the OpenShift Container Platform cluster web console as a cluster administrator.
- In the Administrator perspective, in the left menu, select Operators → Installed Operators.
- Click the Red Hat OpenShift Data Science Operator.
- Click the Subscription tab.
- Under Update approval, click the pencil icon and select one of the following update strategies:
  - Automatic: New updates are installed as soon as they become available.
  - Manual: A cluster administrator must approve any new update before installation begins.
- Click Save.
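The same setting can also be changed from the CLI by patching the Operator's Subscription object, whose spec.installPlanApproval field accepts Automatic or Manual. This is a sketch: the Subscription name (rhods-operator) and namespace (redhat-ods-operator) are assumptions, so confirm them first with `oc get subscription --all-namespaces`. The command defaults to a dry run that only prints what would be executed; set OC=oc to run it for real.

```shell
# Dry-run by default; set OC=oc to execute against a real cluster.
OC="${OC:-echo oc}"

# Switch the update strategy to Manual (use "Automatic" to switch back).
# Subscription name and namespace are assumptions; verify them on your cluster.
$OC patch subscription rhods-operator -n redhat-ods-operator \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'
```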
Additional resources
- For more information about the subscription channels that are available in version 2 of the Red Hat OpenShift Data Science Operator, see Installing the Red Hat OpenShift Data Science Operator.
- For more information about upgrading Operators that have been installed using OLM, see Updating installed Operators in the OpenShift Container Platform documentation.
6.3. Cleaning up unused resources from version 1 of Red Hat OpenShift Data Science
Version 1 of OpenShift Data Science created a set of Kubeflow Deployment Definition (that is, KfDef) custom resource instances on your OpenShift Container Platform cluster for various components of OpenShift Data Science. When you upgrade to version 2, these resources are no longer used and require manual removal from your cluster. The following procedures show how to remove unused KfDef instances from your cluster by using either the OpenShift command-line interface (CLI) or the web console.
6.3.1. Removing unused resources by using the CLI
The following procedure shows how to remove unused KfDef instances from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects in your OpenShift Container Platform cluster by using the OpenShift command-line interface (CLI). These resources become unused after you upgrade from version 1 to version 2 of OpenShift Data Science.
Prerequisites
- You upgraded from version 1 to version 2 of OpenShift Data Science.
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
Procedure
- Open a new terminal window.
In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:
$ oc login <openshift_cluster_url> -u system:admin
- Delete any KfDef instances that exist in the redhat-ods-applications project:
$ oc delete kfdef --all -n redhat-ods-applications --ignore-not-found || true
For any KfDef instance that is deleted, the output is similar to the following example:
kfdef.kfdef.apps.kubeflow.org "rhods-dashboard" deleted
Tip: If deletion of a KfDef instance fails to finish, you can force deletion of the object by using the information in the "Force individual object removal when it has finalizers" section of the following Red Hat solution article: https://access.redhat.com/solutions/4165791.
- Delete any KfDef instances in the redhat-ods-monitoring and rhods-notebooks projects by entering the following commands:
$ oc delete kfdef --all -n redhat-ods-monitoring --ignore-not-found || true
$ oc delete kfdef --all -n rhods-notebooks --ignore-not-found || true
Verification
- Check whether all KfDef instances have been removed from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects:
$ oc get kfdef --all-namespaces
- Verify that you see no KfDef instances listed in the redhat-ods-applications, redhat-ods-monitoring, or rhods-notebooks projects.
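The deletion and verification steps in this procedure can be condensed into a small script. This is a sketch rather than a documented command: it loops over the three projects named above and defaults to a dry run that only prints each command; set OC=oc to execute it against your cluster.

```shell
# Dry-run by default; set OC=oc to execute against a real cluster.
OC="${OC:-echo oc}"

# Remove leftover KfDef instances from each project, then list any that remain.
for ns in redhat-ods-applications redhat-ods-monitoring rhods-notebooks; do
  $OC delete kfdef --all -n "$ns" --ignore-not-found
done
$OC get kfdef --all-namespaces
```

Because `--ignore-not-found` suppresses errors for already-empty projects, the loop is safe to run more than once.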
6.3.2. Removing unused resources by using the web console
The following procedure shows how to remove unused KfDef instances from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects in your OpenShift Container Platform cluster by using the OpenShift web console. These resources become unused after you upgrade from version 1 to version 2 of OpenShift Data Science.
Prerequisites
- You upgraded from version 1 to version 2 of OpenShift Data Science.
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
Procedure
- Log in to the OpenShift Container Platform web console as a cluster administrator.
- In the web console, click Administration → CustomResourceDefinitions.
- On the CustomResourceDefinitions page, click the KfDef custom resource definition (CRD).
- Click the Instances tab. The page shows all KfDef instances on the cluster.
- Take note of any KfDef instances that exist in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects. These are the projects that you will clean up in the remainder of this procedure.
- To delete a KfDef instance from the redhat-ods-applications, redhat-ods-monitoring, or rhods-notebooks project, click the action menu (⋮) beside the instance and select Delete KfDef from the list. To confirm deletion of the instance, click Delete.
Tip: If deletion of a KfDef instance fails to finish, you can force deletion of the object by using the information in the "Force individual object removal when it has finalizers" section of the following Red Hat solution article: https://access.redhat.com/solutions/4165791.
- Repeat the preceding steps to delete all remaining KfDef instances that you see in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects.