Chapter 4. Preparing OpenShift AI for use in IBM Cloud Pak for Data
The procedures in this section show how to prepare a new installation of Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0.3 or greater. These versions of Cloud Pak for Data include watsonx.ai. The procedures show how to install the KServe component of OpenShift AI in raw mode, which means that KServe does not have any other components as dependencies. However, if you have an existing deployment of OpenShift AI with KServe and its dependencies already installed and enabled, you do not need to modify the configuration for use in Cloud Pak for Data.
4.1. Installing the Red Hat OpenShift AI Operator by using the CLI
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0.3 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and deploying OpenShift AI for more generally applicable instructions.
This procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can manage the installation of OpenShift AI components.
Prerequisites
- You have a running OpenShift cluster, version 4.12 or greater, configured with a default storage class that can be dynamically provisioned.
- You have cluster administrator privileges for your OpenShift cluster.
- You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
Procedure
- Open a new terminal window.
In the OpenShift command-line interface (CLI), log in to your OpenShift cluster as a cluster administrator, as shown in the following example:
$ oc login <openshift_cluster_url> -u <admin_username> -p <password>
Create a new YAML file with the following contents:
apiVersion: v1
kind: Namespace 1
metadata:
  name: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup 2
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription 3
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  name: rhods-operator
  channel: embedded 4
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: "DISABLE_DSC_CONFIG" 5
- 1: Defines the required redhat-ods-operator namespace for installation of the Operator.
- 2: Defines an Operator group for installation of the Operator. You must specify the same namespace that you defined earlier in the file. Also, another Operator group must not exist in the namespace.
- 3: Defines a subscription for the Operator. You must specify the same namespace that you defined earlier in the file.
- 4: Sets the update channel. You must specify the embedded update channel.
- 5: Specifies that the DSCI and DSC custom resources are not dynamically created. This setting is required to install the KServe component of OpenShift AI in raw mode, which means that KServe does not have any other components as dependencies.
Deploy the YAML file to create the namespace, Operator group, and subscription that you defined.
$ oc create -f <file_name>.yaml
You see output similar to the following:
namespace/redhat-ods-operator created
operatorgroup.operators.coreos.com/rhods-operator created
subscription.operators.coreos.com/rhods-operator created
Create another new YAML file with the following contents:
apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization 1
metadata:
  name: default-dsci
spec:
  applicationsNamespace: redhat-ods-applications
  monitoring:
    managementState: Managed
    namespace: redhat-ods-monitoring
  serviceMesh:
    managementState: Removed 2
  trustedCABundle:
    managementState: Managed
    customCABundle: ""
- 1: Defines a DSCInitialization object called default-dsci. The DSCInitialization object is used by the Operator to manage resources that are common to all product components.
- 2: Specifies that the serviceMesh component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in IBM products.
Create the DSCInitialization object on your OpenShift cluster:

$ oc create -f <dsci_file_name>.yaml
Verification
In the OpenShift CLI, check that there is a running pod for the Operator by performing the following actions:
Check the pods in the redhat-ods-operator project.

$ oc get pods -n redhat-ods-operator

Confirm that there is a rhods-operator-* pod with a Running status, as shown in the following example:

NAME                              READY   STATUS    RESTARTS   AGE
rhods-operator-56c85d44c9-vtk74   1/1     Running   0          3h57m
In the OpenShift CLI, check that the DSCInitialization object that you created is running by performing the following actions:

Check the cluster for DSCInitialization objects.

$ oc get dscinitialization

Confirm that there is a default-dsci object with a Ready status, as shown in the following example:

NAME           AGE     PHASE
default-dsci   4d18h   Ready
4.2. Managing Red Hat OpenShift AI components by using the CLI
The following procedure shows how to use the OpenShift command-line interface (CLI) to manage the installation of specific components of Red Hat OpenShift AI on your OpenShift cluster.
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0.3 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and managing Red Hat OpenShift AI components for more generally applicable instructions.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator by using the CLI.
- You have cluster administrator privileges for your OpenShift cluster.
- You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
Procedure
- Open a new terminal window.
In the OpenShift command-line interface (CLI), log in to your OpenShift cluster as a cluster administrator, as shown in the following example:
$ oc login <openshift_cluster_url> -u <admin_username> -p <password>
Create a new YAML file with the following contents:
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster 1
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Removed
    dashboard:
      managementState: Removed
    datasciencepipelines:
      managementState: Removed
    kserve:
      managementState: Managed
      defaultDeploymentMode: RawDeployment 2
      serving:
        managementState: Removed 3
        name: knative-serving
    kueue:
      managementState: Removed
    modelmeshserving:
      managementState: Removed
    ray:
      managementState: Removed
    trainingoperator:
      managementState: Managed 4
    trustyai:
      managementState: Removed
    workbenches:
      managementState: Removed
- 1: Defines a new DataScienceCluster object called default-dsc. The DataScienceCluster object is used by the Operator to manage OpenShift AI components.
- 2: Specifies that the KServe component is installed and managed by the Operator in raw mode, which means that KServe does not have any other components as dependencies.
- 3: Specifies that the serving component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in Cloud Pak for Data.
- 4: Specifies that the Training Operator is installed and managed by the Operator. This component is required if you want to use the Kubeflow Training Operator to tune models.

In addition, observe that the value of the managementState field for all other OpenShift AI components is set to Removed. This value means that the components are not installed.

In general, the Managed and Removed values are described as follows:

- Managed: The Operator actively manages the component, installs it, and tries to keep it active. The Operator upgrades the component only if it is safe to do so.
- Removed: The Operator actively manages the component but does not install it. If the component is already installed, the Operator tries to remove it.
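The behavior of the two values can be sketched as a simple decision, based only on the descriptions above. This is an illustrative Python model, not Operator code:

```python
# Illustrative model of managementState semantics, derived from the
# descriptions above. "installed" is whether the component is present.

def reconcile_action(management_state: str, installed: bool) -> str:
    if management_state == "Managed":
        # Install the component if missing; otherwise keep it active.
        return "keep" if installed else "install"
    if management_state == "Removed":
        # Never install; remove the component if it is present.
        return "remove" if installed else "skip"
    raise ValueError(f"unknown managementState: {management_state}")

print(reconcile_action("Managed", False))  # prints install
print(reconcile_action("Removed", True))   # prints remove
```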
Create the DataScienceCluster object on your OpenShift cluster:

$ oc create -f <file_name>.yaml
You see output similar to the following:
datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
Verification
In the OpenShift CLI, check that there are running pods for the KServe controller and Kubeflow Training Operator by performing the following actions:
Check the pods in the redhat-ods-applications project.

$ oc get pods -n redhat-ods-applications

Confirm that there is a kserve-controller-manager-* pod and a kubeflow-training-operator-* pod with a Running status, similar to the following example:

NAME                                          READY   STATUS    RESTARTS   AGE
kserve-controller-manager-57796d5b44-sh9n5    1/1     Running   0          4m57s
kubeflow-training-operator-7b99d5584c-rh5hb   1/1     Running   0          4m57s
4.3. Editing the model inferencing configuration
Particular use cases in IBM Cloud Pak for Data version 5.0.3 or greater (which include watsonx.ai) might require customizations to the model inferencing configuration used by Red Hat OpenShift AI. Before you can make any such customizations, you need to put the inferencing configuration file in an editable state. In addition, you must make a specific configuration update that prevents errors when using OpenShift AI in Cloud Pak for Data and watsonx.ai.
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0.3 or greater. These versions of Cloud Pak for Data include watsonx.ai.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator by using the CLI.
- You have cluster administrator privileges for your OpenShift cluster.
Procedure
- Log in to the OpenShift web console as a cluster administrator.
- In the web console, click Workloads → ConfigMaps.
- In the Project list, click redhat-ods-applications.
- On the list of ConfigMap resources, click the inferenceservice-config resource and then click the YAML tab.
- In the metadata.annotations section of the file, add opendatahub.io/managed: 'false' as shown:

metadata:
  annotations:
    internal.config.kubernetes.io/previousKinds: ConfigMap
    internal.config.kubernetes.io/previousNames: inferenceservice-config
    internal.config.kubernetes.io/previousNamespaces: opendatahub
    opendatahub.io/managed: 'false'
The additional annotation makes the inferencing configuration file editable.
To prevent errors when using OpenShift AI in Cloud Pak for Data (including watsonx.ai), update the configuration as follows:
In the YAML file, locate the following entry:

"domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}"
Update the value of the domainTemplate field as shown:

"domainTemplate": "example.com"

This new, explicitly specified value ensures that OpenShift AI cannot generate a value for the domainTemplate field that exceeds the maximum length and causes an error. The domainTemplate field is not used by the raw deployment mode that you configured for the KServe component when preparing OpenShift AI for use in Cloud Pak for Data.
- Click Save.
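To see why the default template can overflow, note that it renders to <name>-<namespace>.<ingress-domain>. The following Python sketch illustrates the length problem, assuming the standard DNS limit of 63 characters per label; the service and namespace names are invented for illustration:

```python
# Illustration only: render the default domainTemplate pattern and check
# whether the first DNS label exceeds the usual 63-character limit.

def render_domain(name: str, namespace: str, ingress_domain: str) -> str:
    # Mirrors "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}"
    return f"{name}-{namespace}.{ingress_domain}"

def first_label_too_long(domain: str, limit: int = 63) -> bool:
    return len(domain.split(".")[0]) > limit

domain = render_domain(
    "my-very-long-inference-service-name",      # hypothetical service name
    "my-equally-long-project-namespace-name",   # hypothetical namespace
    "apps.example.com",
)
print(first_label_too_long(domain))  # prints True
```

Because name and namespace are concatenated into a single label, moderately long values overflow the limit, which is why the procedure pins domainTemplate to a short fixed value that raw deployment mode never uses.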