
Chapter 2. Initializing InstructLab


You must initialize the InstructLab environment to begin working with the Red Hat Enterprise Linux AI models.

2.1. Creating your RHEL AI environment

You can start interacting with LLMs and the RHEL AI tooling by initializing the InstructLab environment.

Prerequisites

  • You installed RHEL AI with the bootable container image.
  • You have root user access on your machine.

Procedure

  1. Optional: To set up training profiles, you need to know the GPU accelerators in your machine. You can view your system information by running the following command:

    $ ilab system info
  2. Initialize InstructLab by running the following command:

    $ ilab config init
  3. The CLI prompts you to set up your config.yaml file.

    Example output

    Welcome to InstructLab CLI. This guide will help you to setup your environment.
    Please provide the following values to initiate the
    environment [press Enter for defaults]:
    Generating `/home/<example-user>/.config/instructlab/config.yaml` and `/home/<example-user>/.local/share/instructlab/internal/train_configuration/profiles`...

  4. Follow the CLI prompts to set up your training hardware configurations. This updates your config.yaml file and adds the proper train configurations for training an LLM. Type the number of the YAML file that matches your hardware specifications.

    Important

    These profiles add only the necessary configurations to the train section of your config.yaml file; therefore, any profile can be selected for inference serving a model.

    Example output of selecting training profiles

    Please choose a train profile to use:
    [0] No profile (CPU-only)
    [1] A100_H100_x2.yaml
    [2] A100_H100_x4.yaml
    [3] A100_H100_x8.yaml
    [4] L40_x4.yaml
    [5] L40_x8.yaml
    [6] L4_x8.yaml
    Enter the number of your choice [hit enter for the default CPU-only profile] [0]:

    Example output of a completed ilab config init run

    You selected: A100_H100_x8.yaml
    Initialization completed successfully, you're ready to start using `ilab`. Enjoy!
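The profile filenames shown in the prompt encode the accelerator family and count, so the right choice can usually be read directly off the `ilab system info` output. The following sketch is purely illustrative and not part of the ilab CLI; it maps an accelerator model and count to one of the profile names listed in the prompt above:

```python
# Illustrative helper only -- this mapping is NOT part of the ilab CLI.
# It pairs the accelerator family and count (as reported by
# `ilab system info`) with the profile filenames `ilab config init` offers.

# (family, count) pairs for the profiles listed in the prompt above.
KNOWN_PROFILES = {
    ("A100_H100", 2), ("A100_H100", 4), ("A100_H100", 8),
    ("L40", 4), ("L40", 8), ("L4", 8),
}

# A100 and H100 machines share one profile family in that list.
FAMILY = {"A100": "A100_H100", "H100": "A100_H100", "L40": "L40", "L4": "L4"}

def suggest_profile(model: str, count: int) -> str:
    """Return the matching profile filename, or the CPU-only default."""
    family = FAMILY.get(model)
    if family and (family, count) in KNOWN_PROFILES:
        return f"{family}_x{count}.yaml"
    return "No profile (CPU-only)"

print(suggest_profile("H100", 8))  # A100_H100_x8.yaml
print(suggest_profile("L4", 2))    # No profile (CPU-only)
```

If your hardware does not match any listed profile, the CPU-only default is always a safe starting point, and the train section can be tuned by hand later with `ilab config edit`.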

  5. Configure your system’s GPUs for inference serving. This step is required only if you are using Red Hat Enterprise Linux AI exclusively for inference serving.

    1. Edit your config.yaml file by running the following command:

      $ ilab config edit
    2. In the evaluate section of the configuration file, set the gpus: parameter to the number of accelerators on your machine.

      evaluate:
        base_branch: null
        base_model: ~/.cache/instructlab/models/granite-7b-starter
        branch: null
        gpus: <num-gpus>
    3. In the vllm section of the serve field in the configuration file, set the gpus: parameter and the vllm_args: ["--tensor-parallel-size"] argument to the number of accelerators on your machine.

      serve:
        backend: vllm
        chat_template: auto
        host_port: 127.0.0.1:8000
        llama_cpp:
          gpu_layers: -1
          llm_family: ''
          max_ctx_size: 4096
        model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab
        vllm:
          llm_family: ''
          vllm_args: ["--tensor-parallel-size", "<num-gpus>"]
          gpus: <num-gpus>
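For example, on a machine with four accelerators, the two edited sections would contain the following values (the 4 below is illustrative; substitute your own accelerator count):

```yaml
# Illustrative values for a machine with 4 accelerators.
evaluate:
  gpus: 4
serve:
  backend: vllm
  vllm:
    llm_family: ''
    vllm_args: ["--tensor-parallel-size", "4"]
    gpus: 4
```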
  6. If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, clone the skeleton repository into the taxonomy directory by running the following command:

    rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/

    Directory structure of the InstructLab environment

    ├─ ~/.cache/instructlab/models/ 1
    ├─ ~/.local/share/instructlab/datasets 2
    ├─ ~/.local/share/instructlab/taxonomy 3
    ├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ 4

    1. ~/.cache/instructlab/models/: Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI.
    2. ~/.local/share/instructlab/datasets/: Contains data output from the SDG phase, built on modifications to the taxonomy repository.
    3. ~/.local/share/instructlab/taxonomy/: Contains the skill and knowledge data.
    4. ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/: Contains the output of the multi-phase training process.

Verification

  1. You can view the full config.yaml file by running the following command:

    $ ilab config show
  2. You can also manually edit the config.yaml file by running the following command:

    $ ilab config edit
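Because the accelerator count appears in three places (evaluate.gpus, serve.vllm.gpus, and the --tensor-parallel-size argument), the values can drift apart during manual editing. The sketch below is a hypothetical consistency check, not part of the ilab tooling; it models the relevant keys as a plain dictionary, in the shape a YAML parser would return for the config.yaml fragments shown above:

```python
# Hypothetical consistency check -- NOT part of the ilab CLI.
# Models the relevant config.yaml keys as a plain dict (the shape a YAML
# parser would produce) and verifies all three GPU settings agree.
config = {
    "evaluate": {"gpus": 4},
    "serve": {
        "vllm": {
            "gpus": 4,
            "vllm_args": ["--tensor-parallel-size", "4"],
        }
    },
}

def gpu_settings_consistent(cfg: dict) -> bool:
    """True if evaluate.gpus, serve.vllm.gpus, and the
    --tensor-parallel-size value all name the same accelerator count."""
    eval_gpus = cfg["evaluate"]["gpus"]
    vllm = cfg["serve"]["vllm"]
    args = vllm["vllm_args"]
    # The value follows the flag name in the vllm_args list.
    tp_size = int(args[args.index("--tensor-parallel-size") + 1])
    return eval_gpus == vllm["gpus"] == tp_size

print(gpu_settings_consistent(config))  # True
```

A mismatch in any one of the three values would make the function return False, which is the situation to fix before serving or evaluating a model.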