
Chapter 1. Overview of enabling LAB-tuning


Important

LAB-tuning is currently available in Red Hat OpenShift AI 2.23 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Data scientists can use LAB-tuning in OpenShift AI to run an end-to-end workflow for customizing large language models (LLMs). The LAB (Large-scale Alignment for chatBots) method provides a more efficient alternative to traditional fine-tuning by using taxonomy-guided synthetic data generation (SDG) combined with a multi-phase training process. LAB-tuning workflows can be launched directly from the OpenShift AI dashboard using the preconfigured InstructLab pipeline, simplifying the tuning process.

LAB-tuning depends on several OpenShift AI components working together to support model customization. It uses data science pipelines to run the tuning workflow, KServe to deploy and serve the teacher and judge models, and the Training Operator to run distributed model training across GPU-enabled nodes. LAB-tuning also relies on the model registry to manage model versions, storage connections (such as S3 or OCI) to store pipeline artifacts and model outputs, and GPU hardware profiles to schedule training workloads.
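As an illustration, the components listed above map to entries in the cluster's DataScienceCluster resource. The following sketch shows the relevant components set to Managed; the component keys follow the OpenShift AI Operator API, but verify the exact names against your installed version before applying:

```yaml
# Sketch of a DataScienceCluster enabling the components LAB-tuning uses.
# Verify component key names against your OpenShift AI Operator version.
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:   # runs the InstructLab tuning workflow
      managementState: Managed
    kserve:                 # deploys and serves the teacher and judge models
      managementState: Managed
    trainingoperator:       # runs distributed multi-phase training
      managementState: Managed
    modelregistry:          # manages model versions
      managementState: Managed
```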

To enable LAB-tuning, an OpenShift cluster administrator must configure the required infrastructure and platform components by completing the following tasks:

  • Install the required Operators
  • Install the required components
  • Configure a storage class that supports dynamic provisioning
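For the storage task, a minimal StorageClass sketch is shown below. The provisioner is cluster-specific (the AWS EBS CSI driver appears here purely as an example); any CSI provisioner that supports dynamic provisioning works:

```yaml
# Example StorageClass with dynamic provisioning; the name and provisioner
# are illustrative and must match your cluster's installed CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lab-tuning-storage        # example name
provisioner: ebs.csi.aws.com      # replace with your cluster's CSI provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
```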

A cluster administrator or an OpenShift AI administrator must perform additional setup within the OpenShift AI dashboard:

  • Make LAB-tuning features visible in the dashboard
  • Create a model registry
  • Create a GPU hardware profile
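As a sketch, dashboard visibility for LAB-tuning features is typically controlled through the OdhDashboardConfig resource. The disableFineTuning field shown here is an assumption based on the dashboard configuration API and should be verified against your OpenShift AI version:

```yaml
# Assumed dashboard configuration fragment; field names may differ by version.
apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config
  namespace: redhat-ods-applications
spec:
  dashboardConfig:
    disableFineTuning: false   # assumed flag; false makes LAB-tuning visible
```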

1.1. Requirements for LAB-tuning

  • You have an OpenShift cluster with cluster administrator access.
  • Your OpenShift cluster has at least one node with an NVIDIA GPU (for example, NVIDIA L40S 48 GB) for the LAB-tuning process. Running distributed LAB-tuning workloads requires multiple GPUs, on the same node or across nodes.
  • To deploy the teacher model (rhelai1/modelcar-mixtral-8x7b-instruct-v0-1:1.4), the OpenShift cluster has a worker node with one or more GPUs capable of running the model (for example, a2-ultragpu-4g, which contains 4 x NVIDIA A100 with 80 GB vRAM) and has at least 100 GiB of available disk storage to store the model.
  • To deploy the judge model (rhelai1/modelcar-prometheus-8x7b-v2-0:1.4), the OpenShift cluster has a worker node with one or more GPUs capable of running this model (for example, a2-ultragpu-2g, which contains 2 x NVIDIA A100 with 80 GB vRAM) and has at least 100 GiB of available disk storage to store the model.
  • Your environment meets the prerequisites for installing the required Operators and using the required components, storage, model registry, and GPU hardware profiles.
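For context, the teacher and judge models above are distributed as modelcar OCI images, which KServe can pull through an oci:// storage URI. The following InferenceService is a hypothetical sketch of deploying the teacher model; the registry host, model format, and resource sizing are assumptions to be adapted to your environment:

```yaml
# Hypothetical InferenceService for the teacher model served from a modelcar
# OCI image; registry host, runtime format, and GPU count are assumptions.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: lab-teacher-model          # example name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM                 # assumed serving runtime format
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-mixtral-8x7b-instruct-v0-1:1.4
      resources:
        limits:
          nvidia.com/gpu: "4"      # e.g. 4 x NVIDIA A100 80 GB
```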