Chapter 5. Working with accelerators
Use accelerators, such as NVIDIA GPUs and Habana Gaudi devices, to optimize the performance of your end-to-end data science workflows.
5.1. Overview of accelerators
If you work with large data sets, you can use accelerators to optimize the performance of your data science models in OpenShift AI. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in OpenShift AI to assist your data scientists in the following tasks:
- Natural language processing (NLP)
- Inference
- Training deep neural networks
- Data cleansing and data processing
OpenShift AI supports the following accelerators:
NVIDIA graphics processing units (GPUs)
- To use compute-heavy workloads in your models, you can enable NVIDIA graphics processing units (GPUs) in OpenShift AI.
- To enable GPUs on OpenShift, you must install the NVIDIA GPU Operator.
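For reference, one common way to install the NVIDIA GPU Operator is through Operator Lifecycle Manager. The following is a minimal sketch of such a Subscription, not the full installation procedure: the target namespace, the omitted channel, and the prerequisite Node Feature Discovery Operator are assumptions that you should verify against the NVIDIA GPU Operator documentation for your OpenShift version.

    # Minimal sketch (assumptions noted): subscribe to the certified NVIDIA GPU
    # Operator package through OLM. Assumes the nvidia-gpu-operator namespace and
    # a matching OperatorGroup already exist, and that the Node Feature Discovery
    # Operator is installed first. With no channel set, OLM uses the package's
    # default channel; verify the channel and package details in OperatorHub.
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: gpu-operator-certified
      namespace: nvidia-gpu-operator
    spec:
      name: gpu-operator-certified          # package name in the certified catalog
      source: certified-operators           # catalog source providing the package
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic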
Habana Gaudi devices (HPUs)
- Habana, an Intel company, provides hardware accelerators intended for deep learning workloads. You can use the Habana libraries and software associated with Habana Gaudi devices from your notebooks.
- Before you can successfully enable Habana Gaudi devices on OpenShift AI, you must install the necessary dependencies and version 1.10 of the HabanaAI Operator. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see HabanaAI Operator for OpenShift.
- You can enable Habana Gaudi devices on-premises or with AWS DL1 compute nodes on an AWS instance.
Before you can use an accelerator in OpenShift AI, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for each accelerator. You can create an accelerator profile from the Settings > Accelerator profiles page on the OpenShift AI dashboard.
5.2. Working with accelerator profiles
To configure accelerators for your data scientists to use in OpenShift AI, you must create an associated accelerator profile. An accelerator profile is a custom resource definition (CRD) on OpenShift that has an AcceleratorProfile resource and defines the specification of the accelerator. You can create and manage accelerator profiles by selecting Settings > Accelerator profiles on the OpenShift AI dashboard.
For accelerators that are new to your deployment, you must manually configure an accelerator profile for each accelerator. If your deployment contains an accelerator before you upgrade, the associated accelerator profile remains after the upgrade. You can manage the accelerators that appear to your data scientists by assigning specific accelerator profiles to your custom notebook images. This example shows the code for a Habana Gaudi 1 accelerator profile:
    ---
    apiVersion: dashboard.opendatahub.io/v1alpha
    kind: AcceleratorProfile
    metadata:
      name: hpu-profile-first-gen-gaudi
    spec:
      displayName: Habana HPU - 1st Gen Gaudi
      description: First Generation Habana Gaudi device
      enabled: true
      identifier: habana.ai/gaudi
      tolerations:
        - effect: NoSchedule
          key: habana.ai/gaudi
          operator: Exists
The accelerator profile code appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD). For more information about accelerator profile attributes, see the following table:
Attribute | Type | Required | Description |
---|---|---|---|
displayName | String | Required | The display name of the accelerator profile. |
description | String | Optional | Descriptive text defining the accelerator profile. |
identifier | String | Required | A unique identifier defining the accelerator resource. |
enabled | Boolean | Required | Determines if the accelerator is visible in OpenShift AI. |
tolerations | Array | Optional | The tolerations that can apply to notebooks and serving runtimes that use the accelerator. For more information about the toleration attributes that OpenShift AI supports, see Toleration v1 core. |
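For comparison with the Habana Gaudi example above, the following is a minimal sketch of an accelerator profile for an NVIDIA GPU. The metadata.name, display name, and toleration are illustrative assumptions; the nvidia.com/gpu identifier is the resource name that the NVIDIA GPU Operator exposes.

    # Minimal AcceleratorProfile sketch for an NVIDIA GPU (illustrative values).
    ---
    apiVersion: dashboard.opendatahub.io/v1alpha
    kind: AcceleratorProfile
    metadata:
      name: nvidia-gpu-profile             # illustrative name
    spec:
      displayName: NVIDIA GPU
      description: NVIDIA GPU accelerator
      enabled: true
      identifier: nvidia.com/gpu           # resource name exposed by the NVIDIA GPU Operator
      tolerations:
        - effect: NoSchedule
          key: nvidia.com/gpu              # assumes GPU nodes are tainted with this key
          operator: Exists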
5.2.1. Viewing accelerator profiles
If you have defined accelerator profiles for OpenShift AI, you can view, enable, and disable them from the Accelerator profiles page.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You are assigned the cluster-admin role in OpenShift Container Platform.
- Your deployment contains existing accelerator profiles.
Procedure
From the OpenShift AI dashboard, click Settings > Accelerator profiles. The Accelerator profiles page appears, displaying existing accelerator profiles.
- Inspect the list of accelerator profiles. To enable or disable an accelerator profile, on the row containing the accelerator profile, click the toggle in the Enable column.
Verification
- The Accelerator profiles page appears, displaying existing accelerator profiles.
5.2.2. Creating an accelerator profile
To configure accelerators for your data scientists to use in OpenShift AI, you must create an associated accelerator profile.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You are assigned the cluster-admin role in OpenShift Container Platform.
Procedure
From the OpenShift AI dashboard, click Settings > Accelerator profiles. The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.
Click Create accelerator profile.
The Create accelerator profile dialog appears.
- In the Name field, enter a name for the accelerator profile.
- In the Identifier field, enter a unique string that identifies the hardware accelerator associated with the accelerator profile.
- Optional: In the Description field, enter a description for the accelerator profile.
- To enable or disable the accelerator profile immediately after creation, click the toggle in the Enable column.
Optional: Add a toleration to schedule pods with matching taints. (A YAML sketch of the resulting toleration appears after this procedure.)
Click Add toleration.
The Add toleration dialog opens.
From the Operator list, select one of the following options:
- Equal - The key/value/effect parameters must match. This is the default.
- Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any.
From the Effect list, select one of the following options:
- None
- NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.
- PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.
- NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.
- In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
- In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.
- Forever - Pods stay permanently bound to a node.
- Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.
- Click Add.
- Click Create accelerator profile.
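The toleration fields in the Add toleration dialog map onto a standard Kubernetes toleration under spec.tolerations in the AcceleratorProfile resource. The following sketch shows how illustrative values for Operator, Effect, Key, Value, and Toleration Seconds would appear; the key, value, and 300-second timeout are example assumptions only.

    # Illustrative mapping of the Add toleration dialog fields to
    # spec.tolerations in the AcceleratorProfile resource (values are examples).
    tolerations:
      - key: nvidia.com/gpu          # Key field
        operator: Equal              # Operator: Equal or Exists
        value: "true"                # Value field (leave blank when operator is Exists)
        effect: NoExecute            # Effect: NoSchedule, PreferNoSchedule, or NoExecute
        tolerationSeconds: 300       # Toleration Seconds: Custom value in seconds; omit for Forever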
Verification
- The accelerator profile appears on the Accelerator profiles page.
- The Accelerator list appears on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server.
- The accelerator profile appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD).
5.2.3. Updating an accelerator profile
You can update the existing accelerator profiles in your deployment. You might want to change important identifying information, such as the display name, the identifier, or the description.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You are assigned the cluster-admin role in OpenShift Container Platform.
- The accelerator profile exists in your deployment.
Procedure
From the OpenShift AI dashboard, click Settings > Accelerator profiles. The Accelerator profiles page appears, displaying existing accelerator profiles.
Click the action menu (⋮) on the row containing the accelerator profile that you want to update and select Edit from the list. The Edit accelerator profile dialog opens.
- In the Name field, update the accelerator profile name.
- In the Identifier field, update the unique string that identifies the hardware accelerator associated with the accelerator profile, if applicable.
- Optional: In the Description field, update the description of the accelerator profile.
- To enable or disable the accelerator profile, click the toggle in the Enable column.
Optional: Add a toleration to schedule pods with matching taints.
Click Add toleration.
The Add toleration dialog opens.
From the Operator list, select one of the following options:
- Equal - The key/value/effect parameters must match. This is the default.
- Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any.
From the Effect list, select one of the following options:
- None
- NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.
- PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.
- NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.
- In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
- In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.
- Forever - Pods stay permanently bound to a node.
- Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.
- Click Add.
If your accelerator profile contains existing tolerations, you can edit them.
- Click the action menu (⋮) on the row containing the toleration that you want to edit and select Edit from the list.
- Complete the applicable fields to update the details of the toleration.
- Click Update.
- Click Update accelerator profile.
Verification
- If your accelerator profile has new identifying information, this information appears in the Accelerator list on the Start a notebook server page.
5.2.4. Deleting an accelerator profile
To discard accelerator profiles that you no longer require, you can delete them so that they do not appear on the dashboard.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You are assigned the cluster-admin role in OpenShift Container Platform.
- The accelerator profile that you want to delete exists in your deployment.
Procedure
From the OpenShift AI dashboard, click Settings > Accelerator profiles. The Accelerator profiles page appears, displaying existing accelerator profiles.
Click the action menu (⋮) beside the accelerator profile that you want to delete and click Delete.
The Delete accelerator profile dialog opens.
- Enter the name of the accelerator profile in the text field to confirm that you intend to delete it.
- Click Delete.
Verification
- The accelerator profile no longer appears on the Accelerator profiles page.
5.2.5. Configuring a recommended accelerator for notebook images
To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended tag to appear on the dashboard.
Prerequisites
- You have logged in to OpenShift Container Platform.
- You have the cluster-admin role in OpenShift Container Platform.
- You have existing notebook images in your deployment.
Procedure
From the OpenShift AI dashboard, click Settings > Notebook images. The Notebook images page appears, displaying previously imported notebook images.
Click the action menu (⋮) on the row containing the notebook image that you want to update and select Edit from the list. The Update notebook image dialog opens.
- From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the notebook image. If the notebook image contains only one accelerator identifier, the identifier name displays by default.
Click Update.
Note: If you have already configured an accelerator identifier for a notebook image, you can specify a recommended accelerator for the notebook image by creating an associated accelerator profile. To do this, click Create profile on the row containing the notebook image and complete the relevant fields. If the notebook image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile.
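If you manage notebook images declaratively, the same recommendation can typically be expressed as an annotation on the resource that backs the notebook image. The following sketch assumes that the dashboard reads the opendatahub.io/recommended-accelerators annotation from an ImageStream, mirroring the serving runtime annotation described in Section 5.2.6; the resource kind, name, and namespace are assumptions to verify in your deployment.

    # Hypothetical sketch: annotating the ImageStream that backs a notebook image
    # with a recommended accelerator identifier. The kind, name, and namespace
    # are assumptions; verify how notebook images are stored in your deployment.
    apiVersion: image.openshift.io/v1
    kind: ImageStream
    metadata:
      name: custom-habana-notebook          # illustrative notebook image name
      namespace: redhat-ods-applications    # typical OpenShift AI applications namespace
      annotations:
        opendatahub.io/recommended-accelerators: '["habana.ai/gaudi"]'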
Verification
- When your data scientists select an accelerator with a specific notebook image, a tag appears next to the corresponding accelerator indicating its compatibility.
5.2.6. Configuring a recommended accelerator for serving runtimes
To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended accelerator tag for your serving runtimes.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- If you use specialized OpenShift AI groups, you are part of the admin group (for example, {oai-admin-group}) in OpenShift.
Procedure
From the OpenShift AI dashboard, click Settings > Serving runtimes.
The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled in your OpenShift AI deployment. By default, the OpenVINO Model Server runtime is pre-installed and enabled in OpenShift AI.
To add the recommended accelerator tag to your custom runtime, click the action menu (⋮) beside the runtime that you want to edit and select Edit. A page with an embedded YAML editor opens.
Note: You cannot directly edit the OpenVINO Model Server runtime that is included in OpenShift AI by default. However, you can clone this runtime and edit the cloned version. You can then add the edited clone as a new, custom runtime. To do this, click the action menu beside the OpenVINO Model Server and select Duplicate.
In the editor, enter the YAML code to apply the annotation opendatahub.io/recommended-accelerators. The excerpt in this example shows the annotation to set a recommended tag for an NVIDIA GPU accelerator. (A sketch showing where this annotation sits in the full runtime resource appears after this procedure.)

    metadata:
      annotations:
        opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
- Click Update.
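For orientation, the annotation belongs under metadata.annotations of the duplicated runtime. The following sketch shows the top of a custom ServingRuntime with the annotation applied; the runtime name is illustrative and the remainder of the spec (containers, supported model formats, and so on) is omitted.

    # Top of a custom ServingRuntime with the recommended-accelerators annotation
    # applied. The name is illustrative; the rest of the spec comes from the
    # duplicated OpenVINO Model Server runtime and is truncated here.
    apiVersion: serving.kserve.io/v1alpha1
    kind: ServingRuntime
    metadata:
      name: ovms-gpu-custom                 # illustrative name for the duplicated runtime
      annotations:
        opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
    spec:
      # ... spec copied from the duplicated runtime ...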
Verification
- When your data scientists select an accelerator with a specific serving runtime, a tag appears next to the corresponding accelerator indicating its compatibility.
5.3. Habana Gaudi integration
To accelerate your high-performance deep learning (DL) models, you can integrate Habana Gaudi devices in OpenShift AI. OpenShift AI also includes the HabanaAI notebook image. This notebook image is pre-built and ready for your data scientists to use after you install or upgrade OpenShift AI.
Before you can successfully enable Habana Gaudi devices in OpenShift AI, you must install the necessary dependencies and install the HabanaAI Operator. This allows your data scientists to use Habana libraries and software associated with Habana Gaudi devices from their notebooks. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see HabanaAI Operator for OpenShift.
Currently, Habana Gaudi integration is only supported in OpenShift 4.12.
You can use Habana Gaudi accelerators on OpenShift AI with version 1.10.0 of the Habana Gaudi Operator. For information about the supported configurations for version 1.10 of the Habana Gaudi Operator, see Support Matrix v1.10.0.
In addition, the version of the HabanaAI Operator that you install must match the version of the HabanaAI notebook image in your deployment.
You can use Habana Gaudi devices in an Amazon EC2 DL1 instance on OpenShift. Therefore, your OpenShift platform must support EC2 DL1 instances. Habana Gaudi accelerators are available to your data scientists when they create a workbench, serve a model, and create a notebook.
To identify the Habana Gaudi devices present in your deployment, use the lspci utility. For more information, see lspci(8) - Linux man page. Note that even if the lspci utility indicates that Habana Gaudi devices are present in your deployment, the devices are not necessarily ready to use.
Before you can use your Habana Gaudi devices, you must enable them in your OpenShift environment and configure an accelerator profile for each device. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see HabanaAI Operator for OpenShift.
5.3.1. Enabling Habana Gaudi devices
Before you can use Habana Gaudi devices in OpenShift AI, you must install the necessary dependencies and deploy the HabanaAI Operator.
Prerequisites
- You have logged in to OpenShift Container Platform.
- You have the cluster-admin role in OpenShift Container Platform.
Procedure
- To enable Habana Gaudi devices in OpenShift AI, follow the instructions at HabanaAI Operator for OpenShift.
From the OpenShift AI dashboard, click Settings > Accelerator profiles. The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.
Click Create accelerator profile.
The Create accelerator profile dialog opens.
- In the Name field, enter a name for the Habana Gaudi device.
- In the Identifier field, enter a unique string that identifies the Habana Gaudi device, for example, habana.ai/gaudi.
- Optional: In the Description field, enter a description for the Habana Gaudi device.
- To enable or disable the accelerator profile for the Habana Gaudi device immediately after creation, click the toggle in the Enable column.
Optional: Add a toleration to schedule pods with matching taints.
Click Add toleration.
The Add toleration dialog opens.
From the Operator list, select one of the following options:
- Equal - The key/value/effect parameters must match. This is the default.
- Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any.
From the Effect list, select one of the following options:
- None
- NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.
- PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.
- NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.
- In the Key field, enter the toleration key habana.ai/gaudi. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
- In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.
In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.
- Forever - Pods stay permanently bound to a node.
- Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.
- Click Add.
- Click Create accelerator profile.
Verification
From the Administrator perspective, the following Operators appear on the Operators > Installed Operators page:
- HabanaAI
- Node Feature Discovery (NFD)
- Kernel Module Management (KMM)
- The Accelerator list displays the Habana Gaudi accelerator on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server.
- The accelerator profile appears on the Accelerator profiles page.
- The accelerator profile appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD).