Working with distributed workloads
Red Hat OpenShift AI Self-Managed 2.11
Use distributed workloads for faster and more efficient data processing and model training
Abstract
Distributed workloads enable data scientists to use multiple cluster nodes in parallel for faster and more efficient data processing and model training. The CodeFlare framework simplifies task orchestration and monitoring, and provides automated resource scaling and efficient node utilization, including support for GPU-enabled nodes.