Chapter 1. Overview of AI workloads on OpenShift Container Platform
OpenShift Container Platform provides a secure, scalable foundation for running artificial intelligence (AI) workloads across training, inference, and data science workflows.
1.1. Operators for running AI workloads
You can use Operators to run artificial intelligence (AI) and machine learning (ML) workloads on OpenShift Container Platform. With Operators, you can build a customized environment that meets your specific AI/ML requirements while continuing to use OpenShift Container Platform as the core platform for your applications.
OpenShift Container Platform provides several Operators that can help you run AI workloads:
- Leader Worker Set Operator
You can use the Leader Worker Set Operator to run large-scale AI inference workloads reliably across nodes, with synchronization between leader and worker processes. Without this coordination, large multi-node workloads might fail or stall.
For more information, see "Leader Worker Set Operator overview".
- Red Hat build of Kueue
You can use Red Hat build of Kueue to provide structured queues and prioritization so that workloads are handled fairly and efficiently. Without proper prioritization, important jobs might be delayed while less critical jobs occupy resources.
For more information, see Introduction to Red Hat build of Kueue in the Red Hat build of Kueue documentation.
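As an illustration of the leader/worker grouping described above, the following sketch shows a minimal `LeaderWorkerSet` resource using the upstream `leaderworkerset.x-k8s.io` API. The resource name, group sizing, and container image are hypothetical placeholders, not values from this documentation:

```yaml
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: vllm-inference        # hypothetical example name
spec:
  replicas: 2                 # number of leader/worker groups
  leaderWorkerTemplate:
    size: 4                   # pods per group: 1 leader + 3 workers
    leaderTemplate:
      spec:
        containers:
        - name: leader
          image: example.com/inference-server:latest  # placeholder image
    workerTemplate:
      spec:
        containers:
        - name: worker
          image: example.com/inference-server:latest  # placeholder image
```

Each group is scheduled and restarted as a unit, which is what keeps leader and worker processes synchronized during large inference runs.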
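To make the queueing and prioritization model concrete, the following sketch shows how a cluster administrator might define a `ClusterQueue` with resource quotas and a namespace-scoped `LocalQueue` that users submit to, using the upstream `kueue.x-k8s.io` API. The queue names, namespace, quota values, and the referenced `default-flavor` ResourceFlavor are hypothetical and assume that flavor has been created separately:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: team-a-queue           # hypothetical example name
spec:
  namespaceSelector: {}        # admit workloads from any namespace
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor     # assumes this ResourceFlavor exists
      resources:
      - name: "cpu"
        nominalQuota: 64
      - name: "memory"
        nominalQuota: 256Gi
      - name: "nvidia.com/gpu"
        nominalQuota: 8
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: team-a                 # hypothetical example name
  namespace: ai-jobs           # hypothetical namespace
spec:
  clusterQueue: team-a-queue
```

Workloads opt in to queueing by setting the `kueue.x-k8s.io/queue-name: team-a` label; jobs then wait in the queue until quota is available, so critical jobs are not starved by less important ones.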