Chapter 7. Installing and managing Red Hat OpenShift Data Science components

The following procedures show how to use the command-line interface (CLI) and OpenShift Container Platform web console to install and manage components of Red Hat OpenShift Data Science on your OpenShift Container Platform cluster.

7.1. Installing Red Hat OpenShift Data Science components by using the CLI

The following procedure shows how to use the OpenShift command-line interface (CLI) to install specific components of Red Hat OpenShift Data Science on your OpenShift Container Platform cluster.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift Data Science components as part of a new installation. However, if you upgraded from version 1 of OpenShift Data Science, the upgrade process automatically created a default DataScienceCluster object. To inspect the default DataScienceCluster object and change the installation status of Red Hat OpenShift Data Science components, see Updating the installation status of Red Hat OpenShift Data Science components by using the web console.

Prerequisites

  • The Red Hat OpenShift Data Science Operator is installed on your OpenShift Container Platform cluster.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • You have installed the OpenShift command-line interface (CLI), oc.

Procedure

  1. Open a new terminal window.
  2. In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  3. Create a custom resource (CR) file for the DataScienceCluster object, for example rhods-operator-dsc.yaml, with contents similar to the following:

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: "Removed"
        dashboard:
          managementState: "Removed"
        datasciencepipelines:
          managementState: "Removed"
        kserve:
          managementState: "Removed"
        modelmeshserving:
          managementState: "Removed"
        ray:
          managementState: "Removed"
        workbenches:
          managementState: "Removed"
  4. In the spec.components section of the CR, for each OpenShift Data Science component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
    Important
    • The KServe component is part of a composite model-serving stack that is a Limited Availability feature. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. Before enabling the KServe component, you must install some other, dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Serving large language models.
    • You cannot install both the KServe and Model Mesh Serving components.
    • The CodeFlare and KubeRay components are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
    • To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
  5. Create the DataScienceCluster object in your OpenShift Container Platform cluster to install the specified OpenShift Data Science components.

    $ oc create -f rhods-operator-dsc.yaml

    You see output similar to the following:

    datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
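
Before creating the object, you can sanity-check the CR file locally. The following Python sketch is an illustration only, not part of the product or the official procedure; it assumes the exact indentation used in the example file above, maps each component to its managementState, and flags values other than Managed or Removed:

```python
import re

VALID_STATES = {"Managed", "Removed"}

def management_states(path):
    """Map each component under spec.components to its managementState.

    Assumes the indentation of the example CR above: component names
    indented four spaces, managementState indented six.
    """
    states = {}
    component = None
    with open(path) as f:
        for line in f:
            # A component key, e.g. "    kserve:"
            m = re.match(r"^    (\w+):\s*$", line)
            if m:
                component = m.group(1)
                continue
            # Its managementState value, with or without quotes
            m = re.match(r'^      managementState:\s*"?(\w+)"?\s*$', line)
            if m and component:
                states[component] = m.group(1)
                component = None
    return states

def invalid_states(path):
    """Return components whose managementState is not Managed or Removed."""
    return {c: s for c, s in management_states(path).items()
            if s not in VALID_STATES}
```

For example, invalid_states("rhods-operator-dsc.yaml") returns an empty dict when every component in the file is set to a valid state.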

Verification

  • In the OpenShift Container Platform web console, click Workloads → Pods. In the Project list at the top of the page, select redhat-ods-applications. In the project, confirm that there are running pods for each of the OpenShift Data Science components that you installed.
  • In the web console, click Operators → Installed Operators and then perform the following actions:

    • Click the Red Hat OpenShift Data Science Operator.
    • Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    • Select the YAML tab.
    • In the installedComponents section, confirm that the components you installed have a status value of true.
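
The installedComponents check can also be scripted. The sketch below parses a sample of the status section; the sample JSON shape and key names are assumptions modeled on the installedComponents section described above, not output captured from a live cluster:

```python
import json

# Hypothetical sample of the .status section of a DataScienceCluster
# object, e.g. as returned by
# `oc get datasciencecluster default-dsc -o json`.
# The key names here are assumptions for illustration only.
SAMPLE = json.loads("""
{
  "status": {
    "installedComponents": {
      "dashboard": true,
      "workbenches": true,
      "kserve": false
    }
  }
}
""")

def not_installed(dsc, expected):
    """Return the expected components not reported as installed (true)."""
    installed = dsc.get("status", {}).get("installedComponents", {})
    return [c for c in expected if installed.get(c) is not True]

print(not_installed(SAMPLE, ["dashboard", "workbenches"]))  # []
print(not_installed(SAMPLE, ["dashboard", "kserve"]))       # ['kserve']
```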

7.2. Installing Red Hat OpenShift Data Science components by using the web console

The following procedure shows how to use the OpenShift Container Platform web console to install specific components of Red Hat OpenShift Data Science on your cluster.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift Data Science components as part of a new installation. However, if you upgraded from version 1 of OpenShift Data Science, the upgrade process automatically created a default DataScienceCluster object. To inspect the default DataScienceCluster object and change the installation status of Red Hat OpenShift Data Science components, see Updating the installation status of Red Hat OpenShift Data Science components by using the web console.

Prerequisites

  • The Red Hat OpenShift Data Science Operator is installed on your OpenShift Container Platform cluster.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift Data Science Operator.
  3. Create a DataScienceCluster object to install OpenShift Data Science components by performing the following actions:

    1. Click the Data Science Cluster tab.
    2. Click Create DataScienceCluster.
    3. For Configure via, select YAML view.

      An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object.

    4. In the spec.components section of the CR, for each OpenShift Data Science component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

      Managed
      The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
      Removed
      The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
      Important
      • The KServe component is part of a composite model-serving stack that is a Limited Availability feature. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. Before enabling this component, you must install some other, dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Serving large language models.
      • You cannot install both the KServe and Model Mesh Serving components.
      • The CodeFlare and KubeRay components are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
      • To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
  4. Click Create.

Verification

  • On the DataScienceClusters page, click the default-dsc object and then perform the following actions:

    • Select the YAML tab.
    • In the installedComponents section, confirm that the components you installed have a status value of true.
  • In the OpenShift Container Platform web console, click Workloads → Pods and then perform the following actions:

    • In the Project list at the top of the page, select the redhat-ods-applications project.
    • In the project, confirm that there are running pods for each of the OpenShift Data Science components that you installed.

7.3. Updating the installation status of Red Hat OpenShift Data Science components by using the web console

The following procedure shows how to use the OpenShift Container Platform web console to update the installation status of components of Red Hat OpenShift Data Science on your OpenShift Container Platform cluster.

Important

If you upgraded from version 1 to version 2 of OpenShift Data Science, the upgrade process automatically created a default DataScienceCluster object and enabled several components of OpenShift Data Science. In this case, the following procedure shows how to do the following:

  • Change the installation status of the existing Red Hat OpenShift Data Science components
  • Add additional components to the DataScienceCluster object that were not available in version 1 of OpenShift Data Science

Prerequisites

  • The Red Hat OpenShift Data Science Operator is installed on your OpenShift Container Platform cluster.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift Data Science Operator.
  3. Click the Data Science Cluster tab.
  4. On the DataScienceClusters page, click the default-dsc object.
  5. Click the YAML tab.

    An embedded YAML editor opens showing the custom resource (CR) file for the DataScienceCluster object.

  6. In the spec.components section of the CR, for each OpenShift Data Science component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
    Important
    • If they are not already present in the CR file, you can install the CodeFlare, KServe, and KubeRay features by adding components called codeflare, kserve, and ray to the spec.components section of the CR and setting the managementState field for the components to Managed.
    • The KServe component is part of a composite model-serving stack that is a Limited Availability feature. Limited Availability means that you can install and receive support for the feature only with specific approval from Red Hat. Without such approval, the feature is unsupported. Before enabling this component, you must install some other, dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Serving large language models.
    • You cannot install both the KServe and Model Mesh Serving components.
    • The CodeFlare and KubeRay components are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
    • To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
  7. Click Save.
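
If you prefer the CLI to the web console for this kind of update, the same managementState change can be expressed as a JSON merge patch. The following Python sketch builds the patch payload; the helper itself is illustrative only, not part of the product:

```python
import json

def management_state_patch(component, state):
    """Build a JSON merge patch setting one component's managementState."""
    if state not in ("Managed", "Removed"):
        raise ValueError("managementState must be Managed or Removed")
    return json.dumps(
        {"spec": {"components": {component: {"managementState": state}}}}
    )

print(management_state_patch("kserve", "Managed"))
```

The resulting string can then be passed to oc, for example: oc patch datasciencecluster default-dsc --type merge -p '<patch>'.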

Verification

  • On the DataScienceClusters page, click the default-dsc object and then perform the following actions:

    • Select the YAML tab.
    • In the installedComponents section, confirm that the components you installed have a status value of true.
  • In the OpenShift Container Platform web console, click Workloads → Pods. In the Project list at the top of the page, select redhat-ods-applications. In the project, confirm that there are running pods for each of the OpenShift Data Science components that you installed.