Chapter 3. Downloading models
Red Hat Enterprise Linux AI allows you to customize or chat with various Large Language Models (LLMs) provided and built by Red Hat and IBM. You can download these models from the Red Hat RHEL AI registry.
| Large Language Models (LLMs) | Type | Size | Purpose | Support |
|---|---|---|---|---|
| `granite-7b-starter` | Base model | 12.6 GB | Base model for customizing, training and fine-tuning | General availability |
| `granite-7b-redhat-lab` | LAB fine-tuned granite model | 12.6 GB | Granite model for serving and inferencing | General availability |
| `granite-8b-code-instruct` | LAB fine-tuned granite code model | 15.0 GB | LAB fine-tuned granite code model for serving and inferencing | Technology preview |
| `granite-8b-code-base` | Granite fine-tuned code model | 15.0 GB | Granite code model for serving and inferencing | Technology preview |
| `mixtral-8x7b-instruct-v0-1` | Teacher/critic model | 87.0 GB | Teacher and critic model for running Synthetic data generation (SDG) | General availability |
| `prometheus-8x7b-v2-0` | Judge model | 87.0 GB | Judge model for multi-phase training and evaluation | General availability |
Using the `granite-8b-code-instruct` and `granite-8b-code-base` Large Language Models (LLMs) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Models required for customizing the Granite LLM

- The `granite-7b-starter` base LLM.
- The `mixtral-8x7b-instruct-v0-1` teacher model for SDG.
- The `prometheus-8x7b-v2-0` judge model for training and evaluation.
Additional tools required for customizing an LLM

- The `skills-adapter-v3` LoRA layered skills adapter for SDG.
- The `knowledge-adapter-v3` LoRA layered knowledge adapter for SDG.
The LoRA layered adapters do not show up in the output of the `ilab model list` command. You can see the `skills-adapter-v3` and `knowledge-adapter-v3` files in the `~/.cache/instructlab/models` folder.
The granite models listed for serving and inferencing are not currently supported for customizing.
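Because the LoRA adapters do not appear in the `ilab model list` output, it is easy to miss one. The following script is a minimal sketch, not part of the product: it checks the InstructLab cache folder shown above for the five models and adapters this chapter requires. The directory path and model names are taken from this chapter; adjust them for other releases.

```shell
#!/bin/sh
# Sketch: report which of the models and adapters required for customizing
# the Granite LLM are missing from the InstructLab cache directory.
# Directory path and model names come from this chapter.
check_models() {
    dir="$1"
    missing=0
    for m in granite-7b-starter mixtral-8x7b-instruct-v0-1 \
             prometheus-8x7b-v2-0 skills-adapter-v3 knowledge-adapter-v3; do
        if [ ! -e "${dir}/${m}" ]; then
            echo "missing: ${m}"
            missing=$((missing + 1))
        fi
    done
    echo "${missing} missing"
}

check_models "${HOME}/.cache/instructlab/models"
```

If the last line of the output is `0 missing`, all required models and adapters are in place.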
3.1. Downloading the models from a Red Hat repository
You can download the additional optional models created by Red Hat and IBM.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You initialized InstructLab.
- You created a Red Hat registry account and logged in on your machine.
- You have root user access on your machine.
Procedure
To download the additional LLMs, run the following command:
$ ilab model download --repository docker://<repository_and_model> --release <release>
where:
- `<repository_and_model>`: Specifies the repository location of the model as well as the model name. You can access the models from the `registry.redhat.io/rhelai1/` repository.
- `<release>`: Specifies the version of the model. Set to `1.2` for the models that are supported on RHEL AI version 1.2. Set to `latest` for the latest version of the model.
Example command
$ ilab model download --repository docker://registry.redhat.io/rhelai1/granite-7b-starter --release latest
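If you need several of the models listed earlier in this chapter, you can script the downloads. The loop below is a sketch that only prints one `ilab model download` command per model; the model names come from this chapter, and the assumption that every model lives under the same `registry.redhat.io/rhelai1/<model>` path is generalized from the example command and is mine, not the product documentation's.

```shell
#!/bin/sh
# Sketch: print an `ilab model download` command for each model and adapter
# listed in this chapter. The registry.redhat.io/rhelai1/<model> path pattern
# is an assumption generalized from the example command above.
print_download_commands() {
    release="1.2"   # or "latest"
    for model in granite-7b-starter mixtral-8x7b-instruct-v0-1 \
                 prometheus-8x7b-v2-0 skills-adapter-v3 knowledge-adapter-v3; do
        echo "ilab model download --repository docker://registry.redhat.io/rhelai1/${model} --release ${release}"
    done
}

print_download_commands
```

Review the printed commands before running them; to perform the downloads, run each printed line on the machine (or pipe the output to `sh`).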
Verification
You can view all the models downloaded to your system, including any new models created after training, by running the following command:
$ ilab model list
Example output
+-----------------------------------+---------------------+---------+
| Model Name                        | Last Modified       | Size    |
+-----------------------------------+---------------------+---------+
| models/prometheus-8x7b-v2-0       | 2024-08-09 13:28:50 | 87.0 GB |
| models/mixtral-8x7b-instruct-v0-1 | 2024-08-09 13:28:24 | 87.0 GB |
| models/granite-7b-redhat-lab      | 2024-08-09 14:28:40 | 12.6 GB |
| models/granite-7b-starter         | 2024-08-09 14:40:35 | 12.6 GB |
+-----------------------------------+---------------------+---------+
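Because the model files are large, it can be useful to know how much disk space the downloads occupy in total. The helper below is a sketch, not an `ilab` feature: an `awk` filter that sums the Size column of the `ilab model list` table. It assumes every size is reported in GB, as in the example output above.

```shell
#!/bin/sh
# Sketch: sum the "Size" column of `ilab model list` table output.
# Assumes every size is reported in GB, as in the example output above.
sum_sizes() {
    awk -F'|' '/GB/ { gsub(/[ GB]/, "", $4); total += $4 }
               END  { printf "%.1f GB\n", total }'
}

# On a RHEL AI machine:
#   ilab model list | sum_sizes
```

For the four models in the example output above, this reports 199.2 GB.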
You can also list the downloaded models in the `~/.cache/instructlab/models` folder by running the following command:

$ ls ~/.cache/instructlab/models
Example output
granite-7b-starter granite-7b-redhat-lab