Chapter 3. Preparing OpenShift AI for use in IBM Cloud Pak for Data


The procedures in this section show how to prepare Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai.

Important

The procedures in this section apply when you want to prepare a new installation of Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. The procedures show how to install the KServe component of OpenShift AI in raw mode, which means that KServe does not have any other components as dependencies. However, if you have an existing deployment of OpenShift AI with KServe and its dependencies already installed and enabled, you do not need to modify the configuration for use in Cloud Pak for Data.

3.1. Installing the Red Hat OpenShift AI Operator by using the CLI

Important

Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and deploying OpenShift AI for more generally applicable instructions.

This procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift Container Platform cluster. You must install the Operator before you can manage the installation of OpenShift AI components.

Prerequisites

  • You have a running OpenShift Container Platform cluster, version 4.12 or greater, configured with a default storage class that can be dynamically provisioned.
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.

Procedure

  1. Open a new terminal window.
  2. In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  3. Create a new YAML file with the following contents:

    apiVersion: v1
    kind: Namespace 1
    metadata:
      name: redhat-ods-operator
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup 2
    metadata:
      name: rhods-operator
      namespace: redhat-ods-operator
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription 3
    metadata:
      name: rhods-operator
      namespace: redhat-ods-operator
    spec:
      name: rhods-operator
      channel: eus-2.8  4
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      config:
         env:
            - name: "DISABLE_DSC_CONFIG"
    1
    Defines the recommended redhat-ods-operator namespace for installation of the Operator.
    2
    Defines an Operator group for installation of the Operator. You must specify the same namespace that you defined earlier in the file. Also, another Operator group must not exist in the namespace.
    3
    Defines a subscription for the Operator. You must specify the same namespace that you defined earlier in the file.
    4
    For channel, specify a value of eus-2.8. This channel name indicates that OpenShift AI 2.8 is an Extended Update Support (EUS) version. For more information, see Red Hat OpenShift AI Self-Managed Life Cycle.
  4. Deploy the YAML file to create the namespace, Operator group, and subscription that you defined.

    $ oc create -f <file_name>.yaml

    You see output similar to the following:

    namespace/redhat-ods-operator created
    operatorgroup.operators.coreos.com/rhods-operator created
    subscription.operators.coreos.com/rhods-operator created
  5. Create another new YAML file with the following contents:

    apiVersion: dscinitialization.opendatahub.io/v1
    kind: DSCInitialization 1
    metadata:
      name: default-dsci
    spec:
      applicationsNamespace: redhat-ods-applications
      monitoring:
        managementState: Managed
        namespace: redhat-ods-monitoring
      serviceMesh:
        managementState: Removed 2
      trustedCABundle:
        managementState: Managed
        customCABundle: ""
    1
    Defines a DSCInitialization object called default-dsci. The DSCInitialization object is used by the Operator to manage resources that are common to all product components.
    2
    Specifies that the serviceMesh component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in IBM products.
  6. Deploy the YAML file to create the DSCInitialization object that you defined.

    $ oc create -f <file_name>.yaml
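The namespace-consistency requirement in the callouts for step 3 (the OperatorGroup and Subscription must use the namespace defined at the top of the file) can be checked mechanically before you apply the manifest. The following is a minimal sketch that mirrors the three manifests as Python dictionaries; because JSON is a subset of YAML, the serialized output could even be fed to oc create -f:

```python
import json

# Python mirrors of the three manifests from the YAML file in step 3.
NAMESPACE = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "redhat-ods-operator"},
}
OPERATOR_GROUP = {
    "apiVersion": "operators.coreos.com/v1",
    "kind": "OperatorGroup",
    "metadata": {"name": "rhods-operator", "namespace": "redhat-ods-operator"},
}
SUBSCRIPTION = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "rhods-operator", "namespace": "redhat-ods-operator"},
    "spec": {
        "name": "rhods-operator",
        "channel": "eus-2.8",
        "source": "redhat-operators",
        "sourceNamespace": "openshift-marketplace",
        "config": {"env": [{"name": "DISABLE_DSC_CONFIG"}]},
    },
}

# The OperatorGroup and Subscription must live in the namespace created above.
ns = NAMESPACE["metadata"]["name"]
for manifest in (OPERATOR_GROUP, SUBSCRIPTION):
    assert manifest["metadata"]["namespace"] == ns, f'{manifest["kind"]} namespace mismatch'

# JSON is valid YAML, so this output could also be applied with `oc create -f`.
print(json.dumps(SUBSCRIPTION, indent=2))
```

This check runs entirely offline; it validates the structure of the file, not the state of the cluster.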

Verification

  • In the OpenShift CLI, check that there is a running pod for the Operator by performing the following actions:

    • Check the pods in the redhat-ods-operator project.

      $ oc get pods -n redhat-ods-operator
    • Confirm that there is a rhods-operator-* pod with a Running status, as shown in the following example:

      NAME                              READY   STATUS    RESTARTS   AGE
      rhods-operator-56c85d44c9-vtk74   1/1     Running   0          3h57m
  • In the OpenShift CLI, check the status of the DSCInitialization object that you created by performing the following actions:

    • Check the cluster for DSCInitialization objects.

      $ oc get dscinitialization
    • Confirm that there is a default-dsci object with a Ready status, as shown in the following example:

      NAME           AGE     PHASE
      default-dsci   4d18h   Ready


3.2. Managing Red Hat OpenShift AI components by using the CLI

The following procedure shows how to use the OpenShift command-line interface (CLI) to manage the installation of specific components of Red Hat OpenShift AI on your OpenShift Container Platform cluster.

Important

Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and managing Red Hat OpenShift AI components for more generally applicable instructions.

Prerequisites

  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
  • You have installed the Red Hat OpenShift AI Operator, as described in Section 3.1, "Installing the Red Hat OpenShift AI Operator by using the CLI".

Procedure

  1. Open a new terminal window.
  2. In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  3. Create a new YAML file with the following contents:

    kind: DataScienceCluster 1
    apiVersion: datasciencecluster.opendatahub.io/v1
    metadata:
      name: default-dsc
      labels:
        app.kubernetes.io/name: datasciencecluster
        app.kubernetes.io/instance: default-dsc
        app.kubernetes.io/part-of: rhods-operator
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/created-by: rhods-operator
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          managementState: Removed
        kserve:
          managementState: Managed
          defaultDeploymentMode: RawDeployment 2
          serving:
            managementState: Removed 3
            name: knative-serving
        kueue:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        ray:
          managementState: Removed
        workbenches:
          managementState: Removed
    1
    Defines a new DataScienceCluster object called default-dsc. The DataScienceCluster object is used by the Operator to manage OpenShift AI components.
    2
    Specifies that the KServe component is installed and managed by the Operator in raw mode, which means that KServe does not have any other components as dependencies.
    3
    Specifies that the serving component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in Cloud Pak for Data.

    In addition, observe that the value of the managementState field for all other OpenShift AI components is set to Removed. This means that the components are not installed.

    In general, the Managed and Removed values are described as follows:

    Managed
    The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
    Removed
    The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
  4. Create the DataScienceCluster object on your OpenShift cluster.

    $ oc create -f <file_name>.yaml

    You see output similar to the following:

    datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
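The intent of this manifest (KServe Managed in raw mode, every other component Removed) can be sanity-checked in a few lines before the object is created. The following is a minimal sketch that mirrors the spec.components section as a Python dictionary; the checks, not the manifest itself, are the illustration:

```python
# Python mirror of spec.components from the DataScienceCluster manifest above.
COMPONENTS = {
    "codeflare": {"managementState": "Removed"},
    "dashboard": {"managementState": "Removed"},
    "datasciencepipelines": {"managementState": "Removed"},
    "kserve": {
        "managementState": "Managed",
        "defaultDeploymentMode": "RawDeployment",
        "serving": {"managementState": "Removed", "name": "knative-serving"},
    },
    "kueue": {"managementState": "Removed"},
    "modelmeshserving": {"managementState": "Removed"},
    "ray": {"managementState": "Removed"},
    "workbenches": {"managementState": "Removed"},
}

# KServe is the only Managed component; it runs in raw mode with Knative serving disabled.
assert COMPONENTS["kserve"]["managementState"] == "Managed"
assert COMPONENTS["kserve"]["defaultDeploymentMode"] == "RawDeployment"
assert COMPONENTS["kserve"]["serving"]["managementState"] == "Removed"

# Every other component must be Removed for the Cloud Pak for Data use case.
others = {name: c for name, c in COMPONENTS.items() if name != "kserve"}
assert all(c["managementState"] == "Removed" for c in others.values())
print("DataScienceCluster matches the Cloud Pak for Data profile")
```

If any component were later switched to Managed, the final assertion would fail, flagging a drift from the configuration that Cloud Pak for Data expects.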

Verification

  • In the OpenShift CLI, check that there is a running pod for the KServe controller by performing the following actions:

    • Check the pods in the redhat-ods-applications project.

      $ oc get pods -n redhat-ods-applications
    • Confirm that there is a kserve-controller-manager-* pod with a Running status, as shown in the following example:

      NAME                                         READY   STATUS      RESTARTS   AGE
      kserve-controller-manager-57796d5b44-sh9n5   1/1     Running     0          4m57s

3.3. Editing the model inferencing configuration

Particular use cases in IBM Cloud Pak for Data version 5.0 or greater (which include watsonx.ai) might require customizations to the model inferencing configuration used by Red Hat OpenShift AI. Before you can make any such customizations, you need to put the inferencing configuration file in an editable state. In addition, you must make a specific configuration update that prevents errors when using OpenShift AI in Cloud Pak for Data and watsonx.ai.

Important

Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai.

Prerequisites

  • You have cluster administrator privileges for your OpenShift Container Platform cluster.
  • You have installed the KServe component of OpenShift AI, as described in Section 3.2, "Managing Red Hat OpenShift AI components by using the CLI".

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the web console, click Workloads → ConfigMaps.
  3. In the Project list, click redhat-ods-applications.
  4. On the list of ConfigMap resources, click the inferenceservice-config resource and then click the YAML tab.
  5. In the metadata.annotations section of the file, add opendatahub.io/managed: 'false' as shown:

    metadata:
      annotations:
        internal.config.kubernetes.io/previousKinds: ConfigMap
        internal.config.kubernetes.io/previousNames: inferenceservice-config
        internal.config.kubernetes.io/previousNamespaces: opendatahub
        opendatahub.io/managed: 'false'

    The additional annotation makes the inferencing configuration file editable.

  6. To prevent errors when using OpenShift AI in Cloud Pak for Data (including watsonx.ai), update the configuration as follows:

    1. In the YAML file, locate the following entry:

      "domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}
    2. Update the value of the domainTemplate field as shown:

      "domainTemplate": "example.com"

      This explicitly specified value ensures that OpenShift AI cannot generate a domain from the domainTemplate field that exceeds the maximum allowed length and causes an error. The domainTemplate field is not used by the raw deployment mode that you configured for the KServe component when preparing OpenShift AI for use in Cloud Pak for Data.

  7. To directly mount the volume and avoid copying the model, update the value of the enableDirectPvcVolumeMount field as shown:

    "enableDirectPvcVolumeMount": true
  8. Click Save.
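To see why the fixed domainTemplate value matters: the default template expands to <name>-<namespace>.<ingress domain>, and the first DNS label of the result (<name>-<namespace>) must not exceed the 63-character limit that DNS imposes on a single label (RFC 1035). The following short illustration uses hypothetical service, namespace, and ingress-domain names to show how a generated label can overflow:

```python
MAX_DNS_LABEL = 63  # RFC 1035 limit for a single DNS label

def first_label(domain_template: str, name: str, namespace: str) -> str:
    # Mimic the expansion of the template "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}".
    rendered = (domain_template
                .replace("{{ .Name }}", name)
                .replace("{{ .Namespace }}", namespace)
                .replace("{{ .IngressDomain }}", "apps.example.cluster.com"))
    return rendered.split(".", 1)[0]

# Hypothetical (but realistically long) service and namespace names.
label = first_label("{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}",
                    name="flan-t5-xxl-predictor-very-long-deployment-name",
                    namespace="cpd-instance-production-workloads")
print(len(label), len(label) <= MAX_DNS_LABEL)  # prints "81 False"

# The fixed "example.com" value never produces a per-service label at all,
# so it can never exceed the limit.
assert len("example.com".split(".", 1)[0]) <= MAX_DNS_LABEL
```

Because raw deployment mode does not use the domainTemplate field, pinning it to a short constant simply removes the possibility of an overlong generated value.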
© 2024 Red Hat, Inc.