Chapter 3. Preparing OpenShift AI for use in IBM Cloud Pak for Data
The procedures in this section show how to prepare Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai.
These procedures apply when you want to prepare a new installation of Red Hat OpenShift AI for use in Cloud Pak for Data. They show how to install the KServe component of OpenShift AI in raw mode, which means that KServe does not have any other components as dependencies. However, if you have an existing deployment of OpenShift AI with KServe and its dependencies already installed and enabled, you do not need to modify the configuration for use in Cloud Pak for Data.
3.1. Installing the Red Hat OpenShift AI Operator by using the CLI
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and deploying OpenShift AI for more generally applicable instructions.
This procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift Container Platform cluster. You must install the Operator before you can manage the installation of OpenShift AI components.
Prerequisites
- You have a running OpenShift Container Platform cluster, version 4.12 or greater, configured with a default storage class that can be dynamically provisioned.
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
- You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
Procedure
- Open a new terminal window.
In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:
oc login <openshift_cluster_url> -u <admin_username> -p <password>
Create a new YAML file with the following contents:
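For example, a file along the following lines defines all three resources. The resource names match the creation output shown later in this step; the subscription source values (redhat-operators in openshift-marketplace) are the typical defaults for Red Hat operators, so confirm them against your cluster's catalog:

apiVersion: v1
kind: Namespace
metadata:
  name: redhat-ods-operator   # 1
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator   # 2
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator   # 3
spec:
  name: rhods-operator
  channel: eus-2.8   # 4
  source: redhat-operators
  sourceNamespace: openshift-marketplace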
- 1: Defines the recommended redhat-ods-operator namespace for installation of the Operator.
- 2: Defines an Operator group for installation of the Operator. You must specify the same namespace that you defined earlier in the file. Also, another Operator group must not exist in the namespace.
- 3: Defines a subscription for the Operator. You must specify the same namespace that you defined earlier in the file.
- 4: For channel, specify a value of eus-2.8. This channel name indicates that OpenShift AI 2.8 is an Extended Update Support (EUS) version. For more information, see Red Hat OpenShift AI Self-Managed Life Cycle.
Deploy the YAML file to create the namespace, Operator group, and subscription that you defined.
oc create -f <file_name>.yaml
You see output similar to the following:
namespace/redhat-ods-operator created
operatorgroup.operators.coreos.com/rhods-operator created
subscription.operators.coreos.com/rhods-operator created
Create another new YAML file with the following contents:
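For example, a minimal sketch of this file, assuming the default redhat-ods-applications and redhat-ods-monitoring namespaces, looks like the following; adjust the spec to match your environment:

apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization
metadata:
  name: default-dsci   # 1
spec:
  applicationsNamespace: redhat-ods-applications
  monitoring:
    managementState: Managed
    namespace: redhat-ods-monitoring
  serviceMesh:
    managementState: Removed   # 2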
- 1: Defines a DSCInitialization object called default-dsci. The DSCInitialization object is used by the Operator to manage resources that are common to all product components.
- 2: Specifies that the serviceMesh component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in IBM products.
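After you save the file, create the DSCInitialization object on your OpenShift cluster in the same way as the previous file:

oc create -f <file_name>.yaml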
Verification
In the OpenShift CLI, check that there is a running pod for the Operator by performing the following actions:
Check the pods in the redhat-ods-operator project:

oc get pods -n redhat-ods-operator

Confirm that there is a rhods-operator-* pod with a Running status, as shown in the following example:

NAME                              READY   STATUS    RESTARTS   AGE
rhods-operator-56c85d44c9-vtk74   1/1     Running   0          3h57m
In the OpenShift CLI, check that the DSCInitialization object that you created is running by performing the following actions:

Check the cluster for DSCInitialization objects:

oc get dscinitialization

Confirm that there is a default-dsci object with a Ready status, as shown in the following example:

NAME           AGE     PHASE
default-dsci   4d18h   Ready
3.2. Managing Red Hat OpenShift AI components by using the CLI
The following procedure shows how to use the OpenShift command-line interface (CLI) to manage the installation of specific components of Red Hat OpenShift AI on your OpenShift Container Platform cluster.
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai. If this use case does not apply to your organization, see Installing and managing Red Hat OpenShift AI components for more generally applicable instructions.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift Container Platform cluster. See Installing the Red Hat OpenShift AI Operator by using the CLI.
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
- You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
Procedure
- Open a new terminal window.
In the OpenShift command-line interface (CLI), log in to your OpenShift Container Platform cluster as a cluster administrator, as shown in the following example:
oc login <openshift_cluster_url> -u <admin_username> -p <password>
Create a new YAML file with the following contents:
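For example, the following sketch shows such a file. The exact set of component entries depends on your OpenShift AI version; the settings that matter here are the kserve entry with raw deployment mode and the Removed state for every other component:

apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc   # 1
spec:
  components:
    codeflare:
      managementState: Removed
    dashboard:
      managementState: Removed
    datasciencepipelines:
      managementState: Removed
    kserve:
      managementState: Managed   # 2
      defaultDeploymentMode: RawDeployment
      serving:
        managementState: Removed   # 3
    kueue:
      managementState: Removed
    modelmeshserving:
      managementState: Removed
    ray:
      managementState: Removed
    trustyai:
      managementState: Removed
    workbenches:
      managementState: Removed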
- 1: Defines a new DataScienceCluster object called default-dsc. The DataScienceCluster object is used by the Operator to manage OpenShift AI components.
- 2: Specifies that the KServe component is installed and managed by the Operator in raw mode, which means that KServe does not have any other components as dependencies.
- 3: Specifies that the serving component (which is a dependency for KServe in some setups) is not installed. This setting is required when preparing OpenShift AI for use in Cloud Pak for Data.
In addition, observe that the value of the managementState field for all other OpenShift AI components is set to Removed. This means that the components are not installed.

In general, the Managed and Removed values are described as follows:

- Managed
- The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so.
- Removed
- The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it.
Create the DataScienceCluster object on your OpenShift cluster:

oc create -f <file_name>.yaml

You see output similar to the following:

datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
Verification
In the OpenShift CLI, check that there is a running pod for the KServe controller by performing the following actions:
Check the pods in the redhat-ods-applications project:

oc get pods -n redhat-ods-applications

Confirm that there is a kserve-controller-manager-* pod with a Running status, as shown in the following example:

NAME                                         READY   STATUS    RESTARTS   AGE
kserve-controller-manager-57796d5b44-sh9n5   1/1     Running   0          4m57s
3.3. Editing the model inferencing configuration
Particular use cases in IBM Cloud Pak for Data version 5.0 or greater (which include watsonx.ai) might require customizations to the model inferencing configuration used by Red Hat OpenShift AI. Before you can make any such customizations, you need to put the inferencing configuration file in an editable state. In addition, you must make a specific configuration update that prevents errors when using OpenShift AI in Cloud Pak for Data and watsonx.ai.
Follow this procedure only if you are preparing Red Hat OpenShift AI for use in IBM Cloud Pak for Data version 5.0 or greater. These versions of Cloud Pak for Data include watsonx.ai.
Prerequisites
- The Red Hat OpenShift AI Operator is installed on your OpenShift Container Platform cluster. See Installing the Red Hat OpenShift AI Operator by using the CLI.
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
Procedure
- Log in to the OpenShift Container Platform web console as a cluster administrator.
- In the web console, click Workloads → ConfigMaps.
- In the Project list, click redhat-ods-applications.
- On the list of ConfigMap resources, click the inferenceservice-config resource and then click the YAML tab.
- In the metadata.annotations section of the file, add opendatahub.io/managed: 'false'. The additional annotation makes the inferencing configuration file editable.
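For example, a trimmed sketch of the edited ConfigMap might look like the following; only the opendatahub.io/managed annotation is added, and all existing entries under data are left unchanged:

apiVersion: v1
kind: ConfigMap
metadata:
  name: inferenceservice-config
  namespace: redhat-ods-applications
  annotations:
    opendatahub.io/managed: 'false'
data:
  # ...existing inferencing configuration entries remain as they are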
To prevent errors when using OpenShift AI in Cloud Pak for Data (including watsonx.ai), update the configuration as follows:
In the YAML file, locate the following entry:
"domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}"domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the value of the
domainTemplatefield as shown:"domainTemplate": "example.com"
"domainTemplate": "example.com"Copy to Clipboard Copied! Toggle word wrap Toggle overflow This new, explicitly specified value ensures that OpenShift AI cannot generate a value for the
domainTemplatefield that exceeds the maximum length and causes an error. ThedomainTemplatefield is not used by the raw deployment mode that you configured for the KServe component when preparing OpenShift AI for use in Cloud Pak for Data.
To directly mount the volume and avoid copying the model, update the value of the enableDirectPvcVolumeMount field as shown:

"enableDirectPvcVolumeMount": true

- Click Save.