Chapter 4. Using NVIDIA GPU resources with serverless applications
You can use NVIDIA GPU resources in serverless applications on OpenShift Container Platform to accelerate compute-intensive workloads such as machine learning and data processing.
4.1. Specifying GPU requirements for a service
After you enable GPU resources for your OpenShift Container Platform cluster, specify GPU requirements for a Knative service by using the Knative (kn) CLI.
Prerequisites
- You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on the cluster.
- You have installed the Knative (kn) CLI.
- You have enabled GPU resources for your OpenShift Container Platform cluster.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Note: Using NVIDIA GPU resources is not supported for IBM zSystems and IBM Power on OpenShift Container Platform or OpenShift Dedicated.
Procedure
Create a Knative service and set the GPU resource requirement limit to 1 by using the --limit nvidia.com/gpu=1 flag:

  $ kn service create hello --image <service_image> --limit nvidia.com/gpu=1

A GPU resource requirement limit of 1 means that the service has one dedicated GPU resource. Services do not share GPU resources. Any other services that require GPU resources must wait until the GPU resource is no longer in use.

A limit of 1 GPU also means that applications exceeding usage of 1 GPU resource are restricted. If a service requests more than 1 GPU resource, it is deployed on a node where the GPU resource requirements can be met.
Optional: For an existing service, you can change the GPU resource requirement limit to 3 by using the --limit nvidia.com/gpu=3 flag:

  $ kn service update hello --limit nvidia.com/gpu=3
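The --limit flag sets the container resource limits in the underlying Knative Service specification. As a rough sketch, the manifest that kn produces for the create command above would carry the GPU limit like this (the service name hello and the <service_image> placeholder are taken from the example; the exact generated manifest may include additional fields):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: <service_image>
          resources:
            limits:
              # One dedicated NVIDIA GPU for each service revision pod
              nvidia.com/gpu: 1
```

The nvidia.com/gpu limit is an extended Kubernetes resource, so the scheduler places the pod only on a node that can satisfy it, which matches the scheduling behavior described in the procedure above.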