Chapter 9. About GPU time slicing
GPU time slicing enables multiple workloads to share a single physical GPU by dividing processing time into short, alternating time slices. This method improves resource utilization, reduces idle GPU time, and allows multiple users to run AI/ML workloads concurrently in OpenShift AI. The NVIDIA GPU Operator manages this scheduling based on a time-slicing-config ConfigMap that defines the number of GPU slices for each physical GPU.
Time-slicing differs from Multi-Instance GPU (MIG) partitioning. While MIG provides memory and fault isolation, time-slicing shares the same GPU memory across workloads without strict isolation. Time-slicing is ideal for lightweight inference tasks, data preprocessing, and other scenarios where full GPU isolation is unnecessary.
Consider the following points when using GPU time slicing:
- Memory sharing: All workloads share GPU memory. High memory usage by one workload can impact others.
- Performance trade-offs: While time slicing allows multiple workloads to share a GPU, it does not provide strict resource isolation like MIG.
- GPU compatibility: Time slicing is supported on specific NVIDIA GPUs.
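Once time slicing is enabled, each physical GPU is advertised as multiple nvidia.com/gpu resources, and a workload requests a slice the same way it would request a dedicated GPU. The following pod spec is a minimal sketch of this; the pod name and container image are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-slice-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvcr.io/nvidia/cuda:12.2.0-base-ubi8   # illustrative image; use any CUDA-enabled image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # requests one time slice, not a dedicated GPU

Because all slices share the same physical GPU memory, schedule such pods with the memory considerations above in mind.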
9.1. Enabling GPU time slicing
To enable GPU time slicing in OpenShift AI, you must configure the NVIDIA GPU Operator to allow multiple workloads to share a single GPU.
Prerequisites
- You have logged in to OpenShift.
- You have the cluster-admin role in OpenShift.
- You have installed and configured the NVIDIA GPU Operator (a quick check follows this list).
- The relevant nodes in your deployment contain NVIDIA GPUs.
- The GPU in your deployment supports time slicing.
- You have installed the OpenShift CLI (oc) as described in the appropriate documentation for your cluster:
  - Installing the OpenShift CLI for OpenShift Dedicated
  - Installing the OpenShift CLI for Red Hat OpenShift Service on AWS (classic architecture)
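As an optional quick check of the GPU Operator and GPU nodes before you begin, you can run commands similar to the following. They assume the default nvidia-gpu-operator namespace and the nvidia.com/gpu.present node label applied by GPU Feature Discovery:

oc get pods -n nvidia-gpu-operator
oc get nodes -l nvidia.com/gpu.present=true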
Procedure
Create a config map named time-slicing-config in the namespace that is used by the GPU Operator. For NVIDIA GPUs, this is the nvidia-gpu-operator namespace.
- Log in to the OpenShift web console as a cluster administrator.
- In the Administrator perspective, navigate to Workloads → ConfigMaps.
- On the ConfigMap details page, click the Create Config Map button.
- On the Create Config Map page, for Configure via, select YAML view.
- In the Data field, enter the YAML code for the relevant GPU. See the example time-slicing-config config map for an NVIDIA T4 GPU after this step.

  Note
  - You can change the number of replicas to control the number of GPU slices available for each physical GPU.
  - Increasing replicas might increase the risk of Out of Memory (OOM) errors if workloads exceed available GPU memory.
- Click Create.
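The following is a sketch of what the time-slicing-config config map might look like for an NVIDIA T4 GPU, following the NVIDIA GPU Operator time-slicing format. The data key name (tesla-t4) and the number of replicas are values you adjust for your GPU model and workload density:

apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: nvidia-gpu-operator
data:
  # The key name identifies this configuration; nodes labeled with this value use it.
  tesla-t4: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4   # number of GPU slices advertised per physical GPU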
Update the gpu-cluster-policy cluster policy to reference the time-slicing-config config map:
- In the Administrator perspective, navigate to Operators → Installed Operators.
- Search for the NVIDIA GPU Operator, and then click the Operator name to open the Operator details page.
- Click the ClusterPolicy tab.
- Select the gpu-cluster-policy resource from the list to open the ClusterPolicy details page.
- Click the YAML tab and update the spec.devicePlugin section to reference the time-slicing-config config map. See the example gpu-cluster-policy cluster policy for an NVIDIA T4 GPU after this step.
- Click Save.
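The following is a sketch of the relevant part of the gpu-cluster-policy cluster policy after the update; only the spec.devicePlugin section changes, and the rest of the resource is omitted here. The default value names the config map key to apply when a node has no explicit configuration label:

apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: gpu-cluster-policy
spec:
  devicePlugin:
    config:
      name: time-slicing-config   # the config map created in the previous step
      default: tesla-t4           # key in the config map to use by default
  # ...the remaining ClusterPolicy fields are unchanged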
Label the relevant machine set to apply time slicing:
- In the Administrator perspective, navigate to Compute → MachineSets.
- Select the machine set for GPU time slicing from the list.
- On the MachineSet details page, click the YAML tab and update the spec.template.spec.metadata.labels section to label the relevant machine set. See the example machine set with the appropriate machine label for an NVIDIA T4 GPU after this step.
- Click Save.
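The following is a sketch of the relevant part of a machine set after the update; the machine set name is a placeholder, and only the labels section is shown. The label value must match a key in the time-slicing-config config map so that nodes created from this machine set pick up the T4 configuration:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machine-set-name>          # placeholder
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          nvidia.com/device-plugin.config: tesla-t4   # must match a key in time-slicing-config
  # ...the remaining machine set fields are unchanged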
Verification
- Log in to the OpenShift CLI (oc).
- Verify that you have applied the config map correctly:

  oc get configmap time-slicing-config -n nvidia-gpu-operator -o yaml

- Check that the cluster policy includes the time-slicing configuration:

  oc get clusterpolicy gpu-cluster-policy -o yaml

- Ensure that the label is applied to nodes:

  oc get nodes --show-labels | grep nvidia.com/device-plugin.config
If workloads do not appear to be sharing the GPU, verify that the NVIDIA device plugin is running and that the correct labels are applied.
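For example, you can check the device plugin pods and the advertised GPU capacity with commands similar to the following; the node name is a placeholder, and the reported nvidia.com/gpu capacity should equal the replicas value from time-slicing-config:

oc get pods -n nvidia-gpu-operator | grep device-plugin
oc describe node <gpu-node-name> | grep nvidia.com/gpu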