Building your RHEL AI environment
Abstract
Creating accounts, initializing RHEL AI, downloading models, and serving and chatting with models
Chapter 1. Configuring accounts for RHEL AI
There are a few accounts you need to set up before interacting with RHEL AI.
- Creating a Red Hat account
- You can create a Red Hat account by registering on the Red Hat website. You can follow the procedure in Register for a Red Hat account.
- Creating a Red Hat registry account
Before you can download models from the Red Hat registry, you need to create a registry account and log in using the CLI. You can view your account username and password by selecting the Regenerate Token button on the webpage.
- You can create a Red Hat registry account by selecting the New Service Account button on the Registry Service Accounts page.
- There are several ways you can log in to your registry account via the CLI. Follow the procedure in Red Hat Container Registry authentication to log in on your machine.
- Optional: Configuring Red Hat Insights for hybrid cloud deployments
Red Hat Insights is an offering that gives you visibility into the environments you are deploying. This platform can also help identify operational and vulnerability risks in your system. For more information about Red Hat Insights, see Red Hat Insights data and application security.
You can create a Red Hat Insights account using an activation key and organization parameters by following the procedure in Viewing an activation key.
You can then configure your account on your machine by running the following command:
$ rhc connect --organization <org id> --activation-key <created key>
To run RHEL AI in a disconnected environment, or to opt out of Red Hat Insights, run the following commands:
$ sudo mkdir -p /etc/ilab
$ sudo touch /etc/ilab/insights-opt-out
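An illustrative way to confirm the opt-out marker is in place (not part of the documented procedure):
$ ls -l /etc/ilab/insights-opt-out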
Chapter 2. Initializing InstructLab
You must initialize the InstructLab environment to begin working with the Red Hat Enterprise Linux AI models.
2.1. Creating your RHEL AI environment
You can start interacting with LLM models and the RHEL AI tooling by initializing the InstructLab environment.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You have root user access on your machine.
Procedure
Optional: To set up training profiles, you need to know the GPU accelerators in your machine. You can view your system information by running the following command:
$ ilab system info
Initialize InstructLab by running the following command:
$ ilab config init
The CLI prompts you to set up your config.yaml file.
Example output
Welcome to InstructLab CLI. This guide will help you to setup your environment.
Please provide the following values to initiate the environment [press Enter for defaults]:
Generating `/home/<example-user>/.config/instructlab/config.yaml` and `/home/<example-user>/.local/share/instructlab/internal/train_configuration/profiles`...
Follow the CLI prompts to set up your training hardware configurations. This updates your config.yaml file and adds the proper train configurations for training an LLM model. Type the number of the YAML file that matches your hardware specifications.
Important
These profiles only add the necessary configurations to the train section of your config.yaml file; therefore, any profile can be selected for inference serving a model.
Example output of selecting training profiles
Please choose a train profile to use:
[0] No profile (CPU-only)
[1] A100_H100_x2.yaml
[2] A100_H100_x4.yaml
[3] A100_H100_x8.yaml
[4] L40_x4.yaml
[5] L40_x8.yaml
[6] L4_x8.yaml
Enter the number of your choice [hit enter for the default CPU-only profile] [0]:
Example output of a completed ilab config init run
You selected: A100_H100_x8.yaml
Initialization completed successfully, you're ready to start using `ilab`. Enjoy!
Configuring your system's GPU for inference serving: This step is only required if you are using Red Hat Enterprise Linux AI exclusively for inference serving.
Edit your config.yaml file by running the following command:
$ ilab config edit
In the evaluate section of the configuration file, edit the gpus: parameter and add the number of accelerators on your machine.
evaluate:
  base_branch: null
  base_model: ~/.cache/instructlab/models/granite-7b-starter
  branch: null
  gpus: <num-gpus>
In the vllm section of the serve field in the configuration file, edit the gpus: and vllm_args: ["--tensor-parallel-size"] parameters and add the number of accelerators on your machine.
serve:
  backend: vllm
  chat_template: auto
  host_port: 127.0.0.1:8000
  llama_cpp:
    gpu_layers: -1
    llm_family: ''
    max_ctx_size: 4096
  model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab
  vllm:
    llm_family: ''
    vllm_args: ["--tensor-parallel-size", "<num-gpus>"]
    gpus: <num-gpus>
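After saving your edits, you can spot-check them with the ilab config show command described under Verification. An illustrative one-liner (the grep pattern is only a convenience):
$ ilab config show | grep -E 'gpus|tensor-parallel-size'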
If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can clone the skeleton repository and place it in the taxonomy directory by running the following command:
$ rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/
Directory structure of the InstructLab environment
├─ ~/.cache/instructlab/models/ (1)
├─ ~/.local/share/instructlab/datasets (2)
├─ ~/.local/share/instructlab/taxonomy (3)
├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ (4)
1. ~/.cache/instructlab/models/: Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI.
2. ~/.local/share/instructlab/datasets/: Contains data output from the SDG phase, built on modifications to the taxonomy repository.
3. ~/.local/share/instructlab/taxonomy/: Contains the skill and knowledge data.
4. ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/: Contains the output of the multi-phase training process.
Verification
You can view the full config.yaml file by running the following command:
$ ilab config show
You can also manually edit the config.yaml file by running the following command:
$ ilab config edit
Chapter 3. Downloading models
Red Hat Enterprise Linux AI allows you to customize or chat with various Large Language Models (LLMs) provided and built by Red Hat and IBM. You can download these models from the Red Hat RHEL AI registry.
| Large Language Models (LLMs) | Type | Size | Purpose | Support |
|---|---|---|---|---|
| granite-7b-starter | Base model | 12.6 GB | Base model for customizing, training and fine-tuning | General availability |
| granite-7b-redhat-lab | LAB fine-tuned granite model | 12.6 GB | Granite model for serving and inferencing | General availability |
| granite-8b-code-instruct | LAB fine-tuned granite code model | 15.0 GB | LAB fine-tuned granite code model for serving and inferencing | Technology preview |
| granite-8b-code-base | Granite fine-tuned code model | 15.0 GB | Granite code model for serving and inferencing | Technology preview |
| mixtral-8x7b-instruct-v0-1 | Teacher/critic model | 87.0 GB | Teacher and critic model for running Synthetic data generation (SDG) | General availability |
| prometheus-8x7b-v2-0 | Judge model | 87.0 GB | Judge model for multi-phase training and evaluation | General availability |
Using the `granite-8b-code-instruct` and `granite-8b-code-base` Large Language Models (LLMs) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Models required for customizing the Granite LLM
- The granite-7b-starter base LLM.
- The mixtral-8x7b-instruct-v0-1 teacher model for SDG.
- The prometheus-8x7b-v2-0 judge model for training and evaluation.
Additional tools required for customizing an LLM
- The skills-adapter-v3 LoRA layered skills adapter for SDG.
- The knowledge-adapter-v3 LoRA layered knowledge adapter for SDG.
The LoRA layered adapters do not show up in the output of the ilab model list command. You can see the skills-adapter-v3 and knowledge-adapter-v3 files in the ~/.cache/instructlab/models directory.
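For example, an illustrative way to confirm the adapters are present (the grep filter is only a convenience):
$ ls ~/.cache/instructlab/models | grep adapter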
The listed granite models for serving and inferencing are not currently supported for customizing.
3.1. Downloading the models from a Red Hat repository
You can download the additional optional models created by Red Hat and IBM.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You initialized InstructLab.
- You created a Red Hat registry account and logged in on your machine.
- You have root user access on your machine.
Procedure
To download the additional LLM models, run the following command:
$ ilab model download --repository docker://<repository_and_model> --release 1.1
where:
<repository_and_model>
    Specifies the repository location of the model as well as the model. You can access the models from the registry.redhat.io/rhelai1/ repository.
<release>
    Specifies the version of the model. Set to 1.1 for the models that are supported on RHEL AI version 1.1.
Example command
$ ilab model download --repository docker://registry.redhat.io/rhelai1/granite-7b-starter --release latest
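If you plan to customize the Granite LLM, you can fetch all three required models at once. A minimal sketch, assuming each model is published under the registry.redhat.io/rhelai1/ repository described above:
$ for model in granite-7b-starter mixtral-8x7b-instruct-v0-1 prometheus-8x7b-v2-0; do
    ilab model download --repository docker://registry.redhat.io/rhelai1/$model --release 1.1
  done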
Verification
You can view all the downloaded models, including the new models after training, on your system with the following command:
$ ilab model list
Example output
+-----------------------------------+---------------------+---------+
| Model Name                        | Last Modified       | Size    |
+-----------------------------------+---------------------+---------+
| models/prometheus-8x7b-v2-0       | 2024-08-09 13:28:50 | 87.0 GB |
| models/mixtral-8x7b-instruct-v0-1 | 2024-08-09 13:28:24 | 87.0 GB |
| models/granite-7b-redhat-lab      | 2024-08-09 14:28:40 | 12.6 GB |
| models/granite-7b-starter         | 2024-08-09 14:40:35 | 12.6 GB |
+-----------------------------------+---------------------+---------+
You can also list the downloaded models in the ~/.cache/instructlab/models folder by running the following command:
$ ls ~/.cache/instructlab/models
Example output
granite-7b-starter granite-7b-redhat-lab
Chapter 4. Serving and chatting with the models
To interact with various models on Red Hat Enterprise Linux AI, you must first serve the model, which hosts it on a server. You can then chat with the model.
4.1. Serving the model
To interact with the models, you must first activate the model in a machine through serving. The ilab model serve command starts a vLLM server that allows you to chat with the model.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You initialized InstructLab.
- You downloaded your preferred Granite LLMs.
- You have root user access on your machine.
Procedure
If you do not specify a model, the default model, granite-7b-redhat-lab, is served. Run the following command:
$ ilab model serve
To serve a specific model, run the following command:
$ ilab model serve --model-path <model-path>
Example command
$ ilab model serve --model-path ~/.cache/instructlab/models/granite-7b-code-instruct
Example output of when the model is served and ready
INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-7b-code-instruct' with -1 gpu-layers and 4096 max context size.
Starting server process
After application startup complete see http://127.0.0.1:8000/docs for API.
Press CTRL+C to shut down the server.
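Because ilab model serve starts a vLLM server, which exposes an OpenAI-compatible API, you can sanity-check the server from another terminal. A minimal sketch, assuming the /v1/chat/completions path; <served-model-name> is a placeholder for the model your server loaded:
$ # <served-model-name> is a placeholder; use the model name your server reports
$ curl http://127.0.0.1:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "<served-model-name>", "messages": [{"role": "user", "content": "Hello"}]}'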
4.1.1. Optional: Running ilab model serve as a service
You can set up a systemd service so that the ilab model serve command runs as a service. The systemd service runs the ilab model serve command in the background and restarts it if it crashes or fails. You can configure the service to start upon system boot.
Prerequisites
- You installed the Red Hat Enterprise Linux AI image on bare metal.
- You initialized InstructLab.
- You downloaded your preferred Granite LLMs.
- You have root user access on your machine.
Procedure
Create a directory for your systemd user service by running the following command:
$ mkdir -p $HOME/.config/systemd/user
Create your systemd service file with the following example configurations:
$ cat << EOF > $HOME/.config/systemd/user/ilab-serve.service
[Unit]
Description=ilab model serve service

[Install]
WantedBy=multi-user.target default.target (1)

[Service]
ExecStart=ilab model serve --model-family granite
Restart=always
EOF
1. Specifies to start by default on boot.
Reload the systemd manager configuration by running the following command:
$ systemctl --user daemon-reload
Start the ilab model serve systemd service by running the following command:
$ systemctl --user start ilab-serve.service
You can check that the service is running with the following command:
$ systemctl --user status ilab-serve.service
You can check the service logs by running the following command:
$ journalctl --user-unit ilab-serve.service
To allow the service to start on boot, run the following command:
$ sudo loginctl enable-linger
Optional: There are a few optional commands you can run for maintaining your systemd service.
You can stop the ilab-serve service by running the following command:
$ systemctl --user stop ilab-serve.service
You can prevent the service from starting on boot by removing "WantedBy=multi-user.target default.target" from the $HOME/.config/systemd/user/ilab-serve.service file.
4.2. Chatting with the model
Once you are serving your model, you can chat with it.
The model you are chatting with must match the model you are serving. With the default config.yaml file, the granite-7b-redhat-lab model is the default for serving and chatting.
Prerequisites
- You installed RHEL AI with the bootable container image.
- You initialized InstructLab.
- You downloaded your preferred Granite LLMs.
- You are serving a model.
- You have root user access on your machine.
Procedure
- Since you are serving the model in one terminal window, you must open another terminal to chat with the model.
To chat with the default model, run the following command:
$ ilab model chat
To chat with a specific model, run the following command:
$ ilab model chat --model <model-path>
Example command
$ ilab model chat --model ~/.cache/instructlab/models/granite-7b-code-instruct
Example output of the chatbot
$ ilab model chat
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Welcome to InstructLab Chat w/ GRANITE-7B-CODE-INSTRUCT (type /h for help) │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
>>> [S][default]
Type exit to leave the chatbot.
4.2.1. Optional: Creating an API key for model chatting
By default, the ilab CLI does not use authentication. If you want to expose your server to the internet, you can create an API key that connects to your server with the following procedures.
Prerequisites
- You installed the Red Hat Enterprise Linux AI image on bare metal.
- You initialized InstructLab.
- You downloaded your preferred Granite LLMs.
- You have root user access on your machine.
Procedure
Create an API key that is held in the $VLLM_API_KEY parameter by running the following command:
$ export VLLM_API_KEY=$(python -c 'import secrets; print(secrets.token_urlsafe())')
You can view the API key by running the following command:
$ echo $VLLM_API_KEY
Update the config.yaml by running the following command:
$ ilab config edit
Add the following parameters to the vllm_args section of your config.yaml file.
serve:
  vllm:
    vllm_args:
    - --api-key
    - <api-key-string>
where:
<api-key-string>
    Specify your API key string.
You can verify that the server is using API key authentication by running the following command:
$ ilab model chat
You will see the following error, which shows an unauthorized user:
openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'}
Verify that your API key is working by running the following command:
$ ilab chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key $VLLM_API_KEY
Example output
$ ilab model chat
╭──────────────────────────────────────────────── system ────────────────────────────────────────────────╮
│ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help)                                        │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
>>> [S][default]
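You can also exercise the key outside the ilab CLI. A minimal sketch with curl, assuming the server exposes the OpenAI-compatible /v1/models listing and expects the key as a Bearer token:
$ # $VLLM_API_KEY holds the key created earlier in this procedure
$ curl https://inference.rhelai.com/v1/models -H "Authorization: Bearer $VLLM_API_KEY"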
4.2.2. Optional: Allowing chat access to a model from a secure endpoint
You can serve an inference endpoint and allow others to interact with models provided with Red Hat Enterprise Linux AI over secure connections by creating a systemd service and setting up an nginx reverse proxy that exposes a secure endpoint. This allows you to share the secure endpoint with others so they can chat with the model over a network.
The following procedure uses self-signed certificates, but it is recommended to use certificates issued by a trusted Certificate Authority (CA).
The following procedure is supported only on bare metal platforms.
Prerequisites
- You installed the Red Hat Enterprise Linux AI image on bare-metal.
- You initialized InstructLab.
- You downloaded your preferred Granite LLMs.
- You have root user access on your machine.
Procedure
Create a directory for your certificate file and key by running the following command:
$ mkdir -p `pwd`/nginx/ssl/
Create an OpenSSL configuration file with the proper configurations by running the following command:
$ cat > openssl.cnf <<EOL
[ req ]
default_bits       = 2048
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[ req_distinguished_name ]
C  = US
ST = California
L  = San Francisco
O  = My Company
OU = My Division
CN = rhelai.redhat.com

[ v3_req ]
subjectAltName = @alt_names
basicConstraints = critical, CA:true
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer

[ alt_names ]
DNS.1 = rhelai.redhat.com
DNS.2 = www.rhelai.redhat.com
EOL
Generate a self-signed certificate with a Subject Alternative Name (SAN) enabled by running the following command:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf
Create the nginx configuration file and add it to `pwd`/nginx/conf.d by running the following command:
$ mkdir -p `pwd`/nginx/conf.d
$ echo 'server {
    listen 8443 ssl;
    server_name rhelai.redhat.com; (1)
    ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt;
    ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf
1. Specify the name of your server. In the example, the server name is rhelai.redhat.com.
Run the Nginx container with the new configurations by running the following command:
$ podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx
If you want to use port 443, you must run the podman run command as a root user.
You can now connect to a serving ilab machine using a secure endpoint URL. Example command:
$ ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1
Optional: You can also get the server certificate and append it to the Certifi CA bundle.
Get the server certificate by running the following command:
$ openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt
Copy the certificate to your system's trusted CA storage directory and update the CA trust store with the following commands:
$ sudo cp server.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
You can append your certificate to the Certifi CA bundle by running the following command:
$ cat server.crt >> $(python -m certifi)
You can now run ilab model chat with a self-signed certificate. Example command:
$ ilab chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1
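Once the certificate is trusted, a quick connectivity check is possible without the ilab CLI. An illustrative sketch, assuming the vLLM backend serves the OpenAI-compatible /v1/models path through the nginx proxy:
$ curl https://rhelai.redhat.com:8443/v1/models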