Chapter 4. Installing the distributed workloads components
To use the distributed workloads feature in OpenShift AI, you must install several components.
Prerequisites
- You have logged in to OpenShift with the cluster-admin role, and you can access the data science cluster.
- You have installed Red Hat OpenShift AI.
- You have sufficient resources. In addition to the minimum OpenShift AI resources described in Installing and deploying OpenShift AI (for disconnected environments, see Deploying OpenShift AI in a disconnected environment), you need 1.6 vCPU and 2 GiB memory to deploy the distributed workloads infrastructure.
- You have removed any previously installed instances of the CodeFlare Operator, as described in the Knowledgebase solution How to migrate from a separately installed CodeFlare Operator in your data science cluster.
- If you want to use graphics processing units (GPUs), you have enabled GPU support in OpenShift AI. If you use NVIDIA GPUs, see Enabling NVIDIA GPUs. If you use AMD GPUs, see AMD GPU integration.
Note: In OpenShift AI, Red Hat supports the use of accelerators within the same cluster only.
Starting from Red Hat OpenShift AI 2.19, Red Hat supports remote direct memory access (RDMA) for NVIDIA GPUs only, enabling them to communicate directly with each other by using NVIDIA GPUDirect RDMA across either Ethernet or InfiniBand networks.
- If you want to use self-signed certificates, you have added them to a central Certificate Authority (CA) bundle as described in Working with certificates (for disconnected environments, see Working with certificates). No additional configuration is necessary to use those certificates with distributed workloads. The centrally configured self-signed certificates are automatically available in the workload pods at the following mount points (an illustrative sketch follows the mount points):
Cluster-wide CA bundle:
/etc/pki/tls/certs/odh-trusted-ca-bundle.crt
/etc/ssl/certs/odh-trusted-ca-bundle.crt
Custom CA bundle:
/etc/pki/tls/certs/odh-ca-bundle.crt
/etc/ssl/certs/odh-ca-bundle.crt
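For illustration only, the following sketch shows how such a bundle can surface inside a workload pod. It assumes the odh-trusted-ca-bundle ConfigMap name and the ca-bundle.crt key used by the default OpenShift AI configuration; the operator wires this up automatically, so treat the exact volume layout as an assumption, not a contract:

apiVersion: v1
kind: Pod
metadata:
  name: example-distributed-workload   # hypothetical name, for illustration only
spec:
  containers:
    - name: worker
      image: registry.example.com/worker:latest   # hypothetical image
      volumeMounts:
        # The platform mounts the cluster-wide bundle at the paths listed above.
        - name: odh-trusted-ca-cert
          mountPath: /etc/pki/tls/certs/odh-trusted-ca-bundle.crt
          subPath: ca-bundle.crt
          readOnly: true
  volumes:
    - name: odh-trusted-ca-cert
      configMap:
        name: odh-trusted-ca-bundle   # assumption: default platform-managed ConfigMap
        items:
          - key: ca-bundle.crt
            path: ca-bundle.crt
            # The custom bundle key (odh-ca-bundle.crt) is mounted in the same way.

You do not create this wiring yourself; it is shown only so you can recognize the mounts when inspecting a workload pod.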
Procedure
- In the OpenShift console, click Operators → Installed Operators.
- Search for the Red Hat OpenShift AI Operator, and then click the Operator name to open the Operator details page.
- Click the Data Science Cluster tab.
- Click the default instance name (for example, default-dsc) to open the instance details page.
- Click the YAML tab to show the instance specifications.
- Enable the required distributed workloads components. In the spec.components section, set the managementState field correctly for the required components (see the example after the table):
  - If you want to use the CodeFlare framework to tune models, enable the codeflare, kueue, and ray components.
  - If you want to use the Kubeflow Training Operator to tune models, enable the kueue and trainingoperator components.
  - The list of required components depends on whether the distributed workload is run from a pipeline, a notebook, or both, as shown in the following table.
Table 4.1. Components required for distributed workloads

Component              Pipelines only   Notebooks only   Pipelines and notebooks
codeflare              Managed          Managed          Managed
dashboard              Managed          Managed          Managed
datasciencepipelines   Managed          Removed          Managed
kueue                  Managed          Managed          Managed
ray                    Managed          Managed          Managed
trainingoperator       Managed          Managed          Managed
workbenches            Removed          Managed          Managed
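For example, to support distributed workloads from both pipelines and notebooks (the last column of the table), the spec.components section of the default-dsc instance would contain entries like the following. This is a minimal excerpt, not a complete DataScienceCluster manifest; your instance includes additional components and fields:

spec:
  components:
    codeflare:
      managementState: Managed    # CodeFlare framework
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed    # set to Removed for notebooks-only use
    kueue:
      managementState: Managed
    ray:
      managementState: Managed
    trainingoperator:
      managementState: Managed    # Kubeflow Training Operator
    workbenches:
      managementState: Managed    # set to Removed for pipelines-only use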
- Click Save. After a short time, the components with a Managed state are ready.
Verification
Check the status of the codeflare-operator-manager, kubeflow-training-operator, kuberay-operator, and kueue-controller-manager pods, as follows:
- In the OpenShift console, from the Project list, select redhat-ods-applications.
- Click Workloads → Deployments.
- Search for the codeflare-operator-manager, kubeflow-training-operator, kuberay-operator, and kueue-controller-manager deployments. In each case, check the status as follows:
- Click the deployment name to open the deployment details page.
- Click the Pods tab.
- Check the pod status. When the status of the codeflare-operator-manager-<pod-id>, kubeflow-training-operator-<pod-id>, kuberay-operator-<pod-id>, and kueue-controller-manager-<pod-id> pods is Running, the pods are ready to use.
- To see more information about each pod, click the pod name to open the pod details page, and then click the Logs tab.
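As an alternative to checking the pods in the console, you can inspect the DataScienceCluster resource itself: in recent releases its status reports each installed component, so a healthy installation shows entries like the following excerpt. Field names can vary between versions, so treat this excerpt as an assumption to verify against your cluster:

status:
  installedComponents:
    codeflare: true          # true once the component is deployed
    kueue: true
    ray: true
    trainingoperator: true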
Next step
Configure the distributed workloads feature as described in Managing distributed workloads.