
Chapter 3. Deploying OpenShift AI in a disconnected environment


Read this section to understand how to deploy Red Hat OpenShift AI as a development and testing environment for data scientists in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall, and cannot reach the remote registries where the Red Hat-provided OperatorHub sources reside. Instead, you can deploy the Red Hat OpenShift AI Operator in a disconnected environment by using a private registry to mirror the required images.

Installing OpenShift AI in a disconnected environment involves the following high-level tasks:

  1. Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed.
  2. Add administrative users for OpenShift. See Adding administrative users in OpenShift.
  3. Mirror images to a private registry. See Mirroring images to a private registry for a disconnected installation.
  4. Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator.
  5. Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components.
  6. Configure user and administrator groups to provide user access to OpenShift AI. See Adding users.
  7. Provide your users with the URL for the OpenShift cluster on which you deployed OpenShift AI. See Accessing the OpenShift AI dashboard.

3.1. Requirements for OpenShift AI Self-Managed

You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster in a disconnected environment:

Product subscriptions

  • You must have a subscription for Red Hat OpenShift AI Self-Managed.

Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one.

Cluster administrator access to your OpenShift cluster

  • You must have an OpenShift cluster with cluster administrator access. Use an existing cluster or create a cluster by following the OpenShift Container Platform documentation: Disconnected installation mirroring.
  • After you install a cluster, configure the Cluster Samples Operator by following the OpenShift Container Platform documentation: Configuring Samples Operator for a restricted cluster.
  • Your cluster must have at least 2 worker nodes with at least 8 CPUs and 32 GiB RAM available for OpenShift AI to use when you install the Operator. To ensure that OpenShift AI is usable, additional cluster resources are required beyond the minimum requirements.
  • Your cluster is configured with a default storage class that can be dynamically provisioned.

    Confirm that a default storage class is configured by running the oc get storageclass command (for example output, see below). If no storage class is marked with (default) beside its name, follow the OpenShift Container Platform documentation to configure a default storage class: Changing the default storage class. For more information about dynamic provisioning, see Dynamic provisioning.

  • Open Data Hub must not be installed on the cluster.

For more information about managing the machines that make up an OpenShift cluster, see Overview of machine management.
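For example, a cluster that meets the storage class requirement returns output similar to the following when you run oc get storageclass. The class name and provisioner shown here are illustrative (an AWS EBS CSI class); only the (default) marker matters:

    $ oc get storageclass
    NAME                PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp3-csi (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   4d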

An identity provider configured for OpenShift

  • Red Hat OpenShift AI uses the same authentication systems as Red Hat OpenShift Container Platform. See Understanding identity provider configuration for more information on configuring identity providers.
  • Access to the cluster as a user with the cluster-admin role; the kubeadmin user is not allowed.
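The following is a minimal sketch of an OAuth configuration that adds an htpasswd identity provider. It assumes that you have already created a Secret named htpass-secret containing an htpasswd file in the openshift-config namespace; adapt it to the identity provider that your organization uses:

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
        - name: htpasswd_provider
          mappingMethod: claim
          type: HTPasswd
          htpasswd:
            fileData:
              name: htpass-secret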

Internet access on the mirroring machine

  • Along with Internet access, the following domains must be accessible to mirror images required for the OpenShift AI Self-Managed installation:

    • cdn.redhat.com
    • subscription.rhn.redhat.com
    • registry.access.redhat.com
    • registry.redhat.io
    • quay.io
  • For CUDA-based images, the following domains must be accessible:

    • ngc.download.nvidia.cn
    • developer.download.nvidia.com

Data science pipelines preparation

  • Data science pipelines 2.0 includes an installation of Argo Workflows. If your cluster already has an installation of Argo Workflows that was not installed by data science pipelines, data science pipelines is disabled after you install OpenShift AI. Before you install OpenShift AI, ensure that no such separate installation of Argo Workflows exists on your cluster, or remove it (see the example check after this list).
  • Before you can execute a pipeline in a disconnected environment, you must upload the images to your private registry. For more information, see Mirroring images to run pipelines in a restricted environment.
  • You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account.
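One way to check for a separate Argo Workflows installation before you install OpenShift AI is to look for Argo Workflows custom resource definitions (CRDs) on the cluster. Because data science pipelines is not installed yet at this point, any argoproj.io CRDs that the command returns belong to a separate installation:

    $ oc get crds | grep argoproj.io

If the command returns no output, there is no existing installation of Argo Workflows on the cluster.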

Install KServe dependencies

  • To support the KServe component, which is used by the single-model serving platform to serve large models, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see Serving large models.
  • If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For information, see Adding an authorization provider for the single-model serving platform.

Access to object storage

3.2. Adding administrative users in OpenShift

Before you can install and configure OpenShift AI for your data scientist users, you must define administrative users. Only users with cluster administrator privileges can install and configure OpenShift AI.

You can create a cluster admin by following the steps in the relevant documentation for your cluster environment.
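For example, assuming the user already exists in your identity provider, one way to grant the cluster-admin role from the CLI is the following command, where <username> is the user that you want to promote:

    $ oc adm policy add-cluster-role-to-user cluster-admin <username>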

3.3. Mirroring images to a private registry for a disconnected installation

You can install the Red Hat OpenShift AI Operator on your OpenShift cluster in a disconnected environment by mirroring the required container images to a private container registry. After you mirror the images, you can install the Red Hat OpenShift AI Operator by using OperatorHub.

You can use the mirror registry for Red Hat OpenShift, a small-scale container registry that you can use as a target for mirroring the required container images for OpenShift AI in a disconnected environment. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in your installation environment.

Prerequisites

  • You have cluster administrator access to a running OpenShift Container Platform cluster, version 4.12 or greater.
  • You have credentials for Red Hat OpenShift Cluster Manager (https://console.redhat.com/openshift/).
  • Your mirroring machine is running Linux, has 100 GB of space available, and has access to the Internet so that it can obtain the images to populate the mirror repository.
  • You have installed the OpenShift CLI (oc).
  • If you plan to use NVIDIA GPUs, you have mirrored and deployed the NVIDIA GPU Operator. See Configuring the NVIDIA GPU Operator in the OpenShift Container Platform documentation.
  • If you plan to use data science pipelines, you have mirrored the OpenShift Pipelines operator.
  • If you plan to use the single-model serving platform to serve large models, you have mirrored the Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh. For more information, see Serving large models.
  • If you plan to use the distributed workloads component, you have mirrored the Ray cluster image.

Procedure

  1. Create a mirror registry. See Creating a mirror registry with mirror registry for Red Hat OpenShift in the OpenShift Container Platform documentation.
  2. To mirror registry images, install the oc-mirror OpenShift CLI plug-in (version 4.12 or greater) on your mirroring machine running Linux. See Installing the oc-mirror OpenShift CLI plug-in in the OpenShift Container Platform documentation.

    Important

    Versions of oc-mirror before version 4.12 do not allow you to mirror the full image set configuration provided.

  3. Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. See Configuring credentials that allow images to be mirrored in the OpenShift Container Platform documentation.
  4. Open the example image set configuration file (rhoai-<version>.md) from the disconnected installer helper repository and examine its contents.
  5. Using the example image set configuration file, create a file called imageset-config.yaml and populate it with values suitable for the image set configuration in your deployment.

    • To view a list of the available OpenShift versions, run the following command. This might take several minutes. If the command returns errors, repeat the steps in Configuring credentials that allow images to be mirrored.

      oc-mirror list operators
    • To see the available channels for a package in a specific version of OpenShift Container Platform (for example, 4.15), run the following command:

      oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.15 --package=<package-name>
    • For information about subscription update channels, see Understanding update channels.

      Important

      The example image set configurations are for demonstration purposes only and might need further alterations depending on your deployment.

      To identify the attributes most suitable for your deployment, examine the documentation and use cases in Mirroring images for a disconnected installation using the oc-mirror plugin.

      Your imageset-config.yaml should look similar to the following example, where openshift-pipelines-operator-rh is required for data science pipelines, and both serverless-operator and servicemeshoperator are required for the KServe component.

      kind: ImageSetConfiguration
      apiVersion: mirror.openshift.io/v1alpha2
      archiveSize: 4
      storageConfig:
        registry:
          imageURL: registry.example.com:5000/mirror/oc-mirror-metadata
          skipTLS: false
      mirror:
        operators:
          - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15
            packages:
              - name: rhods-operator
                channels:
                  - name: stable
                    minVersion: 2.10.0
                    maxVersion: 2.10.0
              - name: openshift-pipelines-operator-rh
                channels:
                  - name: stable
              - name: serverless-operator
                channels:
                  - name: stable
              - name: servicemeshoperator
                channels:
                  - name: stable
  6. Download the specified image set to a local file on your mirroring machine:

    • Replace mirror-rhoai with the target directory where you want to output the image set file.
    • The target directory path must start with file://.
    • The download might take several minutes.

      $ oc mirror --config=./imageset-config.yaml file://mirror-rhoai
      Tip

      If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, set skipTLS to true in your image set configuration file and run the command again.

  7. Verify that the image set .tar files were created:

    $ ls mirror-rhoai
    mirror_seq1_000000.tar mirror_seq1_000001.tar

    If an archiveSize value was specified in the image set configuration file, the image set might be separated into multiple .tar files.

  8. Optional: Verify that the total size of the image set .tar files is approximately 75 GB:

    $ du -h --max-depth=1 ./mirror-rhoai/

    If the total size of the image set is significantly less than 75 GB, run the oc mirror command again.

  9. Upload the contents of the generated image set to your target mirror registry:

    • Replace mirror-rhoai with the directory that contains your image set .tar files.
    • Replace registry.example.com:5000 with your mirror registry.

      $ oc mirror --from=./mirror-rhoai docker://registry.example.com:5000
      Tip

      If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, run the following command:

      $ oc mirror --dest-skip-tls --from=./mirror-rhoai docker://registry.example.com:5000
  10. Log in to your target OpenShift cluster using the OpenShift CLI as a user with the cluster-admin role.
  11. Verify that the YAML files are present for the ImageContentSourcePolicy and CatalogSource resources:

    • Replace <results-directory> with the name of your results directory.

      $ ls oc-mirror-workspace/<results-directory>/
      
      catalogSource-cs-redhat-operator-index.yaml
      charts
      imageContentSourcePolicy.yaml
      mapping.txt
      release-signatures
  12. Install the generated ImageContentSourcePolicy and CatalogSource resources into the cluster:

    • Replace <results-directory> with the name of your results directory.

      $ oc apply -f ./oc-mirror-workspace/<results-directory>/imageContentSourcePolicy.yaml
      $ oc apply -f ./oc-mirror-workspace/<results-directory>/catalogSource-cs-redhat-operator-index.yaml

Verification

  • Verify that the CatalogSource and pod were created successfully:

    $ oc get catalogsource,pod -n openshift-marketplace

    This should return at least one catalog and two pods. For a more targeted check of the OpenShift AI catalog source, see the example after this list.

  • Check that the Red Hat OpenShift AI Operator exists in the OperatorHub:

    1. Log in to the OpenShift Container Platform cluster web console.
    2. Click Operators → OperatorHub.

      The OperatorHub page opens.

    3. Confirm that the Red Hat OpenShift AI Operator is shown.
  • If you mirrored additional Operators, such as OpenShift Pipelines, Red Hat OpenShift Serverless, or Red Hat OpenShift Service Mesh, check that those Operators also appear in the OperatorHub.
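If you want a more targeted CLI check of the mirrored catalog, you can query the generated catalog source and its pod directly. This is a sketch; it assumes the catalog source is named cs-redhat-operator-index, as in the generated CatalogSource resource, and relies on the olm.catalogSource label that Operator Lifecycle Manager typically applies to catalog pods:

    $ oc get catalogsource cs-redhat-operator-index -n openshift-marketplace
    $ oc get pods -n openshift-marketplace -l olm.catalogSource=cs-redhat-operator-index

The catalog source pod should report a STATUS of Running before you proceed to install the Operator.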

3.4. Installing the Red Hat OpenShift AI Operator

This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster using the command-line interface (CLI) and the OpenShift web console.

Note

If you want to upgrade from a previous version of OpenShift AI rather than performing a new installation, see Upgrading OpenShift AI in a disconnected environment.

Note

If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information.

3.4.1. Installing the Red Hat OpenShift AI Operator by using the CLI

The following procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.12 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc).
  • You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a namespace for installation of the Operator by performing the following actions:

    1. Create a namespace YAML file, for example, rhods-operator-namespace.yaml.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: redhat-ods-operator 1
      1
      redhat-ods-operator is the recommended namespace for the Operator.
    2. Create the namespace in your OpenShift cluster.

      $ oc create -f rhods-operator-namespace.yaml

      You see output similar to the following:

      namespace/redhat-ods-operator created
  4. Create an operator group for installation of the Operator by performing the following actions:

    1. Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml.

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator 1
      1
      You must specify the same namespace that you created earlier in this procedure.
    2. Create the OperatorGroup object in your OpenShift cluster.

      $ oc create -f rhods-operator-group.yaml

      You see output similar to the following:

      operatorgroup.operators.coreos.com/rhods-operator created
  5. Create a subscription for installation of the Operator by performing the following actions:

    1. Create a Subscription object CR file, for example, rhods-operator-subscription.yaml.

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: rhods-operator
        namespace: redhat-ods-operator 1
      spec:
        name: rhods-operator
        channel: stable 2
        source: cs-redhat-operator-index
        sourceNamespace: openshift-marketplace
      1
      You must specify the same namespace that you created earlier in this procedure.
      2
      For channel, select a value of fast, stable, stable-x.y, eus-x.y, or alpha. For more information about selecting an update channel, see Understanding update channels.
    2. Create the Subscription object in your OpenShift cluster to install the Operator.

      $ oc create -f rhods-operator-subscription.yaml

      You see output similar to the following:

      subscription.operators.coreos.com/rhods-operator created

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.
  • In the web console, click Home → Projects and confirm that the following project namespaces are visible and listed as Active:

    • redhat-ods-applications
    • redhat-ods-monitoring
    • redhat-ods-operator
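You can also perform an equivalent check from the CLI. The following commands are a sketch; the exact ClusterServiceVersion (CSV) name depends on the Operator version that you installed:

    $ oc get subscription rhods-operator -n redhat-ods-operator
    $ oc get csv -n redhat-ods-operator
    $ oc get namespaces | grep redhat-ods

The CSV should report a PHASE of Succeeded, and the three redhat-ods-* namespaces should be listed as Active.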

3.4.2. Installing the Red Hat OpenShift AI Operator by using the web console

The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster.

Prerequisites

  • You have a running OpenShift cluster, version 4.12 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → OperatorHub.
  3. On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box.
  4. Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens.
  5. Select a Channel. For information about subscription update channels, see Understanding update channels.
  6. Select a Version.
  7. Click Install. The Install Operator page opens.
  8. Review or change the selected channel and version as needed.
  9. For Installation mode, note that the only available value is All namespaces on the cluster (default). This installation mode makes the Operator available to all namespaces in the cluster.
  10. For Installed Namespace, select Operator recommended Namespace: redhat-ods-operator.
  11. For Update approval, select one of the following update strategies:

    • Automatic: Your environment attempts to install new updates when they are available based on the content of your mirror.
    • Manual: A cluster administrator must approve any new updates before installation begins.

      Important

      By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version.

      If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.

      For information about supported versions, see Red Hat OpenShift AI Life Cycle.

  12. Click Install.

    The Installing Operators pane appears. When the installation finishes, a checkmark appears next to the Operator name.

Verification

  • In the OpenShift web console, click Operators → Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses:

    • Installing - installation is in progress; wait for this to change to Succeeded. This might take several minutes.
    • Succeeded - installation is successful.
  • In the web console, click Home → Projects and confirm that the following project namespaces are visible and listed as Active:

    • redhat-ods-applications
    • redhat-ods-monitoring
    • redhat-ods-operator

3.5. Installing and managing Red Hat OpenShift AI components

The following procedures show how to use the command-line interface (CLI) and OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster.

3.5.1. Installing Red Hat OpenShift AI components by using the CLI

The following procedure shows how to use the OpenShift command-line interface (CLI) to install specific components of Red Hat OpenShift AI on your OpenShift cluster.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. However, if you upgraded from version 1 of OpenShift AI (previously OpenShift Data Science), the upgrade process automatically created a default DataScienceCluster object. If you upgraded from a previous minor version, the upgrade process uses the settings from the previous version’s DataScienceCluster object. To inspect the DataScienceCluster object and change the installation status of Red Hat OpenShift AI components, see Updating the installation status of Red Hat OpenShift AI components by using the web console.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Open a new terminal window.
  2. Follow these steps to log in to your OpenShift cluster as a cluster administrator:

    1. In the upper-right corner of the OpenShift web console, click your user name and select Copy login command.
    2. After you have logged in, click Display token.
    3. Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI).

      $ oc login --token=<token> --server=<openshift_cluster_url>
  3. Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml.

    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          managementState: Removed
        kserve:
          managementState: Removed 1 2
        kueue:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        ray:
          managementState: Removed
        workbenches:
          managementState: Removed
    1
    To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Serving large models.
    2
    If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
  4. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
    Important
    • To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Serving large models.
    • If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
    • To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
  5. Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components.

    $ oc create -f rhods-operator-dsc.yaml

    You see output similar to the following:

    datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created

Verification

  • Confirm that there is a running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed.
  • Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.
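If you prefer to verify from the CLI, you can inspect the same information directly on the DataScienceCluster object. This is a sketch that assumes you used the example object name default-dsc:

    $ oc get datasciencecluster default-dsc -o yaml
    $ oc get pods -n redhat-ods-applications

In the returned YAML, check the installedComponents section and confirm that each component you installed has a value of true.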

3.5.2. Installing Red Hat OpenShift AI components by using the web console

The following procedure shows how to use the OpenShift web console to install specific components of Red Hat OpenShift AI on your cluster.

Important

The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. However, if you upgraded from version 1 of OpenShift AI (previously OpenShift Data Science), the upgrade process automatically created a default DataScienceCluster object. If you upgraded from a previous minor version, the upgrade process used the settings from the previous version’s DataScienceCluster object. To inspect the DataScienceCluster object and change the installation status of Red Hat OpenShift AI components, see Updating the installation status of Red Hat OpenShift AI components by using the web console.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator.
  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Create a DataScienceCluster object to install OpenShift AI components by performing the following actions:

    1. Click the Data Science Cluster tab.
    2. Click Create DataScienceCluster.
    3. For Configure via, select YAML view.

      An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object.

    4. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

      Managed
      The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
      Removed
      The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
      Important
      • To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Serving large models.
      • If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
      • To learn how to configure the distributed workloads feature that uses the CodeFlare and KubeRay components, see Configuring distributed workloads.
  4. Click Create.

Verification

  • Confirm that there is a running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed.
  • Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

3.5.3. Updating the installation status of Red Hat OpenShift AI components by using the web console

The following procedure shows how to use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster.

Important

If you upgraded from version 1 to version 2 of OpenShift AI, the upgrade process automatically created a default DataScienceCluster object and enabled several components of OpenShift AI. If you upgraded from a previous minor version, the upgrade process used the settings from the previous version’s DataScienceCluster object.

The following procedure describes how to edit the DataScienceCluster object to do the following:

  • Change the installation status of the existing Red Hat OpenShift AI components
  • Add additional components to the DataScienceCluster object that were not available in the previous version of OpenShift AI.

Prerequisites

  • The Red Hat OpenShift AI Operator is installed on your OpenShift cluster.
  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Click the Data Science Cluster tab.
  4. On the DataScienceClusters page, click the default object.
  5. Click the YAML tab.

    An embedded YAML editor opens showing the custom resource (CR) file for the DataScienceCluster object.

  6. In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed. These values are defined as follows:

    Note

    If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
    Important
    • To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Serving large models.
    • If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies.
    • If they are not already present in the CR file, you can install the CodeFlare, KubeRay, and Kueue components by adding the codeflare, ray, and kueue entries to the spec.components section of the CR and setting the managementState field for the components to Managed.
    • To learn how to configure the distributed workloads feature that uses the CodeFlare, KubeRay, and Kueue components, see Configuring distributed workloads.
    • To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment.
  7. Click Save.

    For any components that you updated, OpenShift AI initiates a rollout of the affected pods so that they use the updated images.

Verification

  • Confirm that there is a running pod for each component:

    1. In the OpenShift web console, click Workloads → Pods.
    2. In the Project list at the top of the page, select redhat-ods-applications.
    3. In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed.
  • Confirm the status of all installed components:

    1. In the OpenShift web console, click Operators → Installed Operators.
    2. Click the Red Hat OpenShift AI Operator.
    3. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc.
    4. Select the YAML tab.
    5. In the installedComponents section, confirm that the components you installed have a status value of true.

      Note

      If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed.

3.5.4. Disabling KServe dependencies

If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed), you must also disable the dependent Service Mesh component to avoid errors.

Prerequisites

  • You have used the OpenShift command-line interface (CLI) or web console to disable the KServe component.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  3. Disable the OpenShift Service Mesh component as follows:

    1. Click the DSC Initialization tab.
    2. Click the default-dsci object.
    3. Click the YAML tab.
    4. In the spec section, add the serviceMesh component (if it is not already present) and configure the managementState field as shown:

      spec:
        serviceMesh:
          managementState: Removed
    5. Click Save.
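If you prefer the CLI, the following patch command is an equivalent sketch. It assumes that the default DSCInitialization object is named default-dsci, as shown in this procedure:

    $ oc patch dscinitialization default-dsci --type merge \
        -p '{"spec": {"serviceMesh": {"managementState": "Removed"}}}'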

Verification

  1. In the web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.

    The Operator details page opens.

  2. In the Conditions section, confirm that there is no ReconcileComplete condition with a status value of Unknown.