Chapter 2. Initializing InstructLab


You must initialize the InstructLab environment to begin working with the Red Hat Enterprise Linux AI models.

2.1. Creating your RHEL AI environment

You can start interacting with LLM models and the RHEL AI tooling by initializing the InstructLab environment.

Prerequisites

  • You installed RHEL AI with the bootable container image.
  • You have root user access on your machine.

Procedure

  1. Optional: To set up training profiles, you need to know which GPU accelerators are in your machine. You can view your system information by running the following command:

    $ ilab system info
  2. Initialize InstructLab by running the following command:

    $ ilab config init
  3. The CLI prompts you to set up your config.yaml file.

    Example output

    Welcome to InstructLab CLI. This guide will help you to setup your environment.
    Please provide the following values to initiate the
    environment [press Enter for defaults]:
    Generating `/home/<example-user>/.config/instructlab/config.yaml` and `/home/<example-user>/.local/share/instructlab/internal/train_configuration/profiles`...

  4. Follow the CLI prompts to set up your training hardware configurations. This updates your config.yaml file and adds the proper train configurations for training an LLM model. Type the number of the YAML file that matches your hardware specifications.

    Important

    These profiles add configurations only to the train section of your config.yaml file; therefore, you can select any profile when serving a model for inference.

    Example output of selecting training profiles

    Please choose a train profile to use:
    [0] No profile (CPU-only)
    [1] A100_H100_x2.yaml
    [2] A100_H100_x4.yaml
    [3] A100_H100_x8.yaml
    [4] L40_x4.yaml
    [5] L40_x8.yaml
    [6] L4_x8.yaml
    Enter the number of your choice [hit enter for the default CPU-only profile] [0]:

    Example output of a completed ilab config init run

    You selected: A100_H100_x8.yaml
    Initialization completed successfully, you're ready to start using `ilab`. Enjoy!

  5. Optional: Configure your system’s GPUs for inference serving. This step is required only if you use Red Hat Enterprise Linux AI exclusively for inference serving.

    1. Edit your config.yaml file by running the following command:

      $ ilab config edit
    2. In the evaluate section of the configuration file, set the gpus: parameter to the number of accelerators on your machine.

      evaluate:
        base_branch: null
        base_model: ~/.cache/instructlab/models/granite-7b-starter
        branch: null
        gpus: <num-gpus>
    3. In the vllm section of the serve field in the configuration file, set the gpus: parameter and the vllm_args: ["--tensor-parallel-size"] value to the number of accelerators on your machine.

      serve:
        backend: vllm
        chat_template: auto
        host_port: 127.0.0.1:8000
        llama_cpp:
          gpu_layers: -1
          llm_family: ''
          max_ctx_size: 4096
        model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab
        vllm:
          llm_family: ''
          vllm_args: ["--tensor-parallel-size", "<num-gpus>"]
          gpus: <num-gpus>
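
    As an illustration, on a machine with eight accelerators both values would be set to 8; the --tensor-parallel-size value must match the gpus: value. The count here is hypothetical; substitute the number of accelerators on your own machine:

    ```yaml
    serve:
      backend: vllm
      vllm:
        llm_family: ''
        # Both values must equal the number of accelerators on the machine.
        vllm_args: ["--tensor-parallel-size", "8"]
        gpus: 8
    ```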
  6. If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can replace your taxonomy directory with a clone of the skeleton repository by running the following command:

    $ rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/
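
    The command above deletes any existing taxonomy directory before cloning. If you have local qna.yaml changes you want to keep, you might first move the directory aside; a minimal sketch (the .bak suffix is an arbitrary choice, not part of the ilab tooling):

    ```shell
    # Move an existing taxonomy checkout aside before replacing it.
    TAXONOMY_DIR="$HOME/.local/share/instructlab/taxonomy"
    if [ -d "$TAXONOMY_DIR" ]; then
      mv "$TAXONOMY_DIR" "${TAXONOMY_DIR}.bak"   # keeps local edits recoverable
      echo "backed up to ${TAXONOMY_DIR}.bak"
    else
      echo "no existing taxonomy at $TAXONOMY_DIR"
    fi
    ```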

    Directory structure of the InstructLab environment

    ├─ ~/.cache/instructlab/models/: Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI.
    ├─ ~/.local/share/instructlab/datasets/: Contains data output from the SDG phase, built on modifications to the taxonomy repository.
    ├─ ~/.local/share/instructlab/taxonomy/: Contains the skill and knowledge data.
    ├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/: Contains the output of the multi-phase training process.
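
    After initialization, you can check which of these directories exist on your system with standard shell tools. This is a convenience sketch, not part of the ilab CLI; the phased checkpoints directory is omitted because it is created only during multi-phase training:

    ```shell
    # Report which of the InstructLab data directories already exist.
    found=0
    for d in "$HOME/.cache/instructlab/models" \
             "$HOME/.local/share/instructlab/datasets" \
             "$HOME/.local/share/instructlab/taxonomy"; do
      if [ -d "$d" ]; then
        echo "present: $d"
        found=$((found + 1))
      else
        echo "missing: $d"
      fi
    done
    echo "$found of 3 directories exist"
    ```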

Verification

  1. You can view the full config.yaml file by running the following command:

    $ ilab config show
  2. You can also manually edit the config.yaml file by running the following command:

    $ ilab config edit