
Chapter 6. Upgrading from version 1 of OpenShift Data Science


As a cluster administrator, you can configure either automatic or manual upgrade from version 1 to version 2 of the Red Hat OpenShift Data Science Operator.

6.1. Overview of upgrading OpenShift Data Science self-managed

As a cluster administrator, you can configure either automatic or manual upgrades for the Red Hat OpenShift Data Science Operator.

  • If you configure automatic upgrades, when a new version of the Red Hat OpenShift Data Science Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
  • If you configure manual upgrades, when a new version of the Red Hat OpenShift Data Science Operator is available, OLM creates an update request.

    A cluster administrator must manually approve the update request to update the Operator to the new version. See Manually approving a pending Operator upgrade for more information about approving a pending Operator upgrade.

  • By default, the Red Hat OpenShift Data Science Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you plan to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
  • When you upgrade OpenShift Data Science, the upgrade process reloads any models that you previously deployed using ModelMesh, which is the default component for model-serving. The models are not available until this reload is complete.
  • Red Hat supports the current release version and three previous release versions of OpenShift Data Science self-managed. For more information, see the Red Hat OpenShift Data Science self-managed Life Cycle knowledgebase article.
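If you configure manual upgrades, you can also review and approve pending install plans from the command line instead of the web console. The following is a minimal sketch; the `redhat-ods-operator` namespace and `<installplan_name>` are placeholders that you must adjust to match where the Operator is installed on your cluster:

```shell
# List install plans and check the APPROVED column for pending updates
oc get installplans -n redhat-ods-operator

# Approve a pending install plan by name so that OLM proceeds with the upgrade
oc patch installplan <installplan_name> \
    -n redhat-ods-operator \
    --type merge \
    -p '{"spec":{"approved":true}}'
```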

6.2. Configuring the upgrade strategy for OpenShift Data Science

As a cluster administrator, you can configure either an automatic or manual upgrade strategy for the Red Hat OpenShift Data Science Operator.

Important
  • By default, the Red Hat OpenShift Data Science Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you intend to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
  • When you upgrade OpenShift Data Science, the upgrade process reloads any models that you previously deployed using ModelMesh, which is the default component for model-serving. The models are not available until this reload is complete.

Prerequisites

  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • The Red Hat OpenShift Data Science Operator is installed.

Procedure

  1. Log in to the OpenShift Container Platform cluster web console as a cluster administrator.
  2. In the Administrator perspective, in the left menu, select Operators → Installed Operators.
  3. Click the Red Hat OpenShift Data Science Operator.
  4. Click the Subscription tab.
  5. Under Update approval, click the pencil icon and select one of the following update strategies:

    • Automatic: New updates are installed as soon as they become available.
    • Manual: A cluster administrator must approve any new update before installation begins.
  6. Click Save.
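You can change the same setting from the CLI by patching the Operator's Subscription. The following is a hedged sketch that assumes a Subscription named rhods-operator in the redhat-ods-operator namespace; verify both values on your cluster with `oc get subscriptions --all-namespaces`:

```shell
# Switch the update strategy; valid values are "Automatic" and "Manual"
oc patch subscription rhods-operator \
    -n redhat-ods-operator \
    --type merge \
    -p '{"spec":{"installPlanApproval":"Manual"}}'
```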


6.3. Cleaning up unused resources from version 1 of Red Hat OpenShift Data Science

Version 1 of OpenShift Data Science created a set of Kubeflow Deployment Definition (that is, KfDef) custom resource instances on your OpenShift Container Platform cluster for various components of OpenShift Data Science. When you upgrade to version 2, these resources are no longer used and require manual removal from your cluster. The following procedures show how to remove unused KfDef instances from your cluster by using both the OpenShift command-line interface (CLI) and the web console.

6.3.1. Removing unused resources by using the CLI

The following procedure shows how to remove unused KfDef instances from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects in your OpenShift Container Platform cluster by using the OpenShift command-line interface (CLI). These resources become unused after you upgrade from version 1 to version 2 of OpenShift Data Science.

Prerequisites

  • You upgraded from version 1 to version 2 of OpenShift Data Science.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure

  1. Open a new terminal window.
  2. In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:

    $ oc login <openshift_cluster_url> -u system:admin
  3. Delete any KfDef instances that exist in the redhat-ods-applications project.

    $ oc delete kfdef --all -n redhat-ods-applications --ignore-not-found || true

    For any KfDef instance that is deleted, the output is similar to the following example:

    kfdef.kfdef.apps.kubeflow.org "rhods-dashboard" deleted
    Tip

    If deletion of a KfDef instance fails to finish, you can force deletion of the object using the information in the "Force individual object removal when it has finalizers" section of the following Red Hat solution article: https://access.redhat.com/solutions/4165791.

  4. Delete any KfDef instances in the redhat-ods-monitoring and rhods-notebooks projects by entering the following commands:

    $ oc delete kfdef --all -n redhat-ods-monitoring --ignore-not-found || true
    $ oc delete kfdef --all -n rhods-notebooks --ignore-not-found || true
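If a KfDef instance stays stuck in a Terminating state, the force-removal approach referenced in the preceding tip comes down to clearing the object's finalizers so that the deletion can complete. The following is a minimal sketch; `<kfdef_name>` is a placeholder, and you repeat the command for each affected project:

```shell
# Remove all finalizers from a stuck KfDef instance so that Kubernetes can delete it
oc patch kfdef <kfdef_name> \
    -n redhat-ods-applications \
    --type merge \
    -p '{"metadata":{"finalizers":[]}}'
```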

Verification

  • Check whether all KfDef instances have been removed from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects.

    $ oc get kfdef --all-namespaces

    Verify that you see no KfDef instances listed in the redhat-ods-applications, redhat-ods-monitoring, or rhods-notebooks projects.

6.3.2. Removing unused resources by using the web console

The following procedure shows how to remove unused KfDef instances from the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects in your OpenShift Container Platform cluster by using the OpenShift web console. These resources become unused after you upgrade from version 1 to version 2 of OpenShift Data Science.

Prerequisites

  • You upgraded from version 1 to version 2 of OpenShift Data Science.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the web console, click Administration → CustomResourceDefinitions.
  3. On the CustomResourceDefinitions page, click the KfDef custom resource definition (CRD).
  4. Click the Instances tab.

    The page shows all KfDef instances on the cluster.

  5. Take note of any KfDef instances that exist in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects. These are the projects that you will clean up in the remainder of this procedure.
  6. To delete a KfDef instance from the redhat-ods-applications, redhat-ods-monitoring, or rhods-notebooks project, click the action menu (⋮) beside the instance and select Delete KfDef from the list.
  7. To confirm deletion of the instance, click Delete.

    Tip

    If deletion of a KfDef instance fails to finish, you can force deletion of the object using the information in the "Force individual object removal when it has finalizers" section of the following Red Hat solution article: https://access.redhat.com/solutions/4165791.

  8. Repeat the preceding steps to delete all remaining KfDef instances that you see in the redhat-ods-applications, redhat-ods-monitoring, and rhods-notebooks projects.
© 2024 Red Hat, Inc.