Chapter 1. Overview of enabling LAB-tuning


Important

LAB-tuning is currently available in Red Hat OpenShift AI 2.23 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Data scientists can use LAB-tuning in OpenShift AI to run an end-to-end workflow for customizing large language models (LLMs). The LAB (Large-scale Alignment for chatBots) method provides a more efficient alternative to traditional fine-tuning by using taxonomy-guided synthetic data generation (SDG) combined with a multi-phase training process. LAB-tuning workflows can be launched directly from the OpenShift AI dashboard using the preconfigured InstructLab pipeline, simplifying the tuning process.

LAB-tuning depends on several OpenShift AI components working together to support model customization. It uses data science pipelines to run the tuning workflow, KServe to deploy and serve the teacher and judge models, and the Training Operator to run distributed model training across GPU-enabled nodes. LAB-tuning also relies on the model registry to manage model versions, storage connections (such as S3 or OCI) to store pipeline artifacts and model outputs, and GPU hardware profiles to schedule training workloads.
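For reference, the components listed above are typically enabled through the `DataScienceCluster` custom resource managed by the Red Hat OpenShift AI Operator. The following is a minimal sketch, not a complete resource; field names follow the OpenShift AI operator API, but verify them against the version installed in your cluster:

```yaml
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      managementState: Managed   # runs the preconfigured InstructLab pipeline
    kserve:
      managementState: Managed   # deploys and serves the teacher and judge models
    trainingoperator:
      managementState: Managed   # runs distributed multi-phase training on GPU nodes
    modelregistry:
      managementState: Managed   # manages tuned model versions
```

Setting `managementState: Managed` instructs the Operator to install and reconcile each component; components not required by LAB-tuning can be left in their existing state.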

To enable LAB-tuning, an OpenShift cluster administrator must configure the required infrastructure and platform components by completing the following tasks:

  • Install the required Operators
  • Install the required components
  • Configure a storage class that supports dynamic provisioning
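The LAB-tuning pipeline creates persistent volume claims for pipeline artifacts, so the cluster needs a storage class that provisions volumes dynamically. The following sketch shows what such a class can look like; the provisioner shown is illustrative, and you should substitute the CSI driver available in your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
  annotations:
    # Marks this class as the cluster default so PVCs without an
    # explicit storageClassName are bound to it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com   # illustrative; any dynamic CSI provisioner works
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

`volumeBindingMode: WaitForFirstConsumer` delays volume creation until a pod is scheduled, which helps place volumes in the same zone as GPU nodes.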

A cluster administrator or an OpenShift AI administrator must perform additional setup within the OpenShift AI dashboard:

  • Make LAB-tuning features visible in the dashboard
  • Create a model registry
  • Create a GPU hardware profile

1.1. Requirements for LAB-tuning

  • You have an OpenShift cluster with cluster administrator access.
  • Your OpenShift cluster has at least one node with one NVIDIA GPU (for example, NVIDIA L40S 48 GB) for the LAB-tuning process. Multiple GPUs, on the same node or on different nodes, are required to run distributed LAB-tuning workloads.
  • To deploy the teacher model (rhelai1/modelcar-mixtral-8x7b-instruct-v0-1:1.4), the OpenShift cluster has a worker node with one or more GPUs capable of running the model (for example, a2-ultragpu-4g, which contains 4 x NVIDIA A100 with 80 GB vRAM) and has at least 100 GiB of available disk storage to store the model.
  • To deploy the judge model (rhelai1/modelcar-prometheus-8x7b-v2-0:1.4), the OpenShift cluster has a worker node with one or more GPUs capable of running the model (for example, a2-ultragpu-2g, which contains 2 x NVIDIA A100 with 80 GB vRAM) and has at least 100 GiB of available disk storage to store the model.
  • Your environment meets the prerequisites for installing the required Operators and using the required components, storage, model registry, and GPU hardware profiles.
© 2025 Red Hat