Chapter 2. Initializing InstructLab
You must initialize the InstructLab environments to begin working with the Red Hat Enterprise Linux AI models.
2.1. Creating your RHEL AI environment
You can start interacting with LLMs and the RHEL AI tooling by initializing the InstructLab environment.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You have root user access on your machine.
Procedure
Optional: To set up training profiles, you need to know the GPU accelerators in your machine. You can view your system information by running the following command:
$ ilab system info
Initialize InstructLab by running the following command:
$ ilab config init
The CLI prompts you to set up your config.yaml file.

Example output

Welcome to InstructLab CLI. This guide will help you to setup your environment.
Please provide the following values to initiate the environment [press Enter for defaults]:
Generating `/home/<example-user>/.config/instructlab/config.yaml` and `/home/<example-user>/.local/share/instructlab/internal/train_configuration/profiles`...
Follow the CLI prompts to set up your training hardware configuration. This updates your config.yaml file and adds the proper train configurations for training an LLM model. Type the number of the YAML file that matches your hardware specifications.

Important: These profiles only add the necessary configurations to the train section of your config.yaml file, so any profile can be selected when serving a model for inference.

Example output of selecting training profiles
Please choose a train profile to use:
[0] No profile (CPU-only)
[1] A100_H100_x2.yaml
[2] A100_H100_x4.yaml
[3] A100_H100_x8.yaml
[4] L40_x4.yaml
[5] L40_x8.yaml
[6] L4_x8.yaml
Enter the number of your choice [hit enter for the default CPU-only profile] [0]:
Example output of a completed ilab config init run

You selected: A100_H100_x8.yaml
Initialization completed successfully, you're ready to start using `ilab`. Enjoy!
Configuring your system’s GPU for inference serving: This step is only required if you are using Red Hat Enterprise Linux AI exclusively for inference serving.
Edit your config.yaml file by running the following command:

$ ilab config edit
In the evaluate section of the configuration file, edit the gpus: parameter and add the number of accelerators on your machine.

evaluate:
  base_branch: null
  base_model: ~/.cache/instructlab/models/granite-7b-starter
  branch: null
  gpus: <num-gpus>
In the vllm section of the serve field in the configuration file, edit the gpus: and vllm_args: ["--tensor-parallel-size"] parameters and add the number of accelerators on your machine.

serve:
  backend: vllm
  chat_template: auto
  host_port: 127.0.0.1:8000
  llama_cpp:
    gpu_layers: -1
    llm_family: ''
    max_ctx_size: 4096
  model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab
  vllm:
    llm_family: ''
    vllm_args: ["--tensor-parallel-size", "<num-gpus>"]
    gpus: <num-gpus>
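The gpus: values and the "--tensor-parallel-size" argument must all refer to the same accelerator count, and a mismatch is easy to introduce when editing the file by hand. The following sketch is not part of the ilab tooling; it is a hypothetical consistency check that assumes the config.yaml layout shown above and that the PyYAML library is available:

```python
import yaml  # PyYAML; assumed available in the environment

# Illustrative fragment mirroring the config.yaml layout shown above.
CONFIG_SNIPPET = """
serve:
  backend: vllm
  vllm:
    vllm_args: ["--tensor-parallel-size", "4"]
    gpus: 4
evaluate:
  gpus: 4
"""

def check_gpu_consistency(raw: str) -> bool:
    """Return True when evaluate.gpus, serve.vllm.gpus, and the
    --tensor-parallel-size argument all agree."""
    cfg = yaml.safe_load(raw)
    vllm = cfg["serve"]["vllm"]
    args = vllm["vllm_args"]
    # The value follows the flag in the vllm_args list.
    tp = int(args[args.index("--tensor-parallel-size") + 1])
    return tp == int(vllm["gpus"]) == int(cfg["evaluate"]["gpus"])

print(check_gpu_consistency(CONFIG_SNIPPET))  # True when all three match
```

Running this against your real file (for example, the output of `cat ~/.config/instructlab/config.yaml`) would flag a forgotten edit before you serve or evaluate a model.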
If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can clone the skeleton repository and place it in the taxonomy directory by running the following command:

$ rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/
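Each qna.yaml file in the taxonomy pairs questions with answers as seed examples. As a rough illustration of that shape, here is a hypothetical validator (not part of the ilab CLI; the sample fragment is simplified and real taxonomy files carry additional fields) that checks each seed example has both keys:

```python
import yaml  # PyYAML; assumed available in the environment

# A simplified skill qna.yaml fragment; real files in the taxonomy
# repository include additional metadata fields.
SAMPLE_QNA = """
seed_examples:
  - question: What is 2 + 2?
    answer: "4"
  - question: Name a primary color.
    answer: Red
"""

def validate_qna(raw: str) -> list:
    """Return a list of problems found in a qna.yaml document; empty means OK."""
    problems = []
    data = yaml.safe_load(raw)
    examples = data.get("seed_examples") if isinstance(data, dict) else None
    if not examples:
        return ["missing seed_examples"]
    for i, ex in enumerate(examples):
        for key in ("question", "answer"):
            if not ex.get(key):
                problems.append(f"example {i}: missing {key}")
    return problems

print(validate_qna(SAMPLE_QNA))  # [] -- the sample passes
```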
Directory structure of the InstructLab environment

├─ ~/.cache/instructlab/models/  1
├─ ~/.local/share/instructlab/datasets  2
├─ ~/.local/share/instructlab/taxonomy  3
├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/  4
1. ~/.cache/instructlab/models/: Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI.
2. ~/.local/share/instructlab/datasets/: Contains data output from the SDG phase, built on modifications to the taxonomy repository.
3. ~/.local/share/instructlab/taxonomy/: Contains the skill and knowledge data.
4. ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/: Contains the output of the multi-phase training process.
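The directories above can be checked quickly after initialization. This is a hypothetical audit helper, not part of the ilab CLI; the paths are taken from the structure listed above:

```python
from pathlib import Path

# InstructLab data directories from the structure above (phased/ shown
# without the per-phase subdirectories, which are created during training).
INSTRUCTLAB_DIRS = [
    Path("~/.cache/instructlab/models"),
    Path("~/.local/share/instructlab/datasets"),
    Path("~/.local/share/instructlab/taxonomy"),
    Path("~/.local/share/instructlab/phased"),
]

def audit_dirs(dirs=INSTRUCTLAB_DIRS):
    """Expand '~' and report whether each directory exists."""
    return [(d.expanduser(), d.expanduser().is_dir()) for d in dirs]

if __name__ == "__main__":
    for path, exists in audit_dirs():
        print(f"{'OK     ' if exists else 'MISSING'} {path}")
```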
Verification
You can view the full config.yaml file by running the following command:

$ ilab config show

You can also manually edit the config.yaml file by running the following command:

$ ilab config edit