Chapter 1. Red Hat Enterprise Linux AI 1.5 release notes
RHEL AI provides organizations with a process to develop enterprise applications on open source Large Language Models (LLMs).
1.1. About this release
Red Hat Enterprise Linux AI version 1.5 includes various features for Large Language Model (LLM) fine-tuning on the Red Hat and IBM produced Granite model. Creating a customized model with the RHEL AI workflow consists of the following steps:
- Install and launch a RHEL 9.4 instance with the InstructLab tooling.
- Host information in a Git repository and interact with a Git-based taxonomy of the knowledge you want a model to learn.
- Run the end-to-end workflow of synthetic data generation (SDG), multi-phase training, and benchmark evaluation.
- Serve and chat with the newly fine-tuned LLM.
1.2. Features and Enhancements
Red Hat Enterprise Linux AI version 1.5 includes various features for Large Language Model (LLM) fine-tuning.
1.2.1. Supported accelerators
1.2.1.1. NVIDIA H200 accelerators
You can now use NVIDIA H200 accelerators on RHEL AI version 1.5 for inference serving and running the full end-to-end workflow. When initializing your RHEL AI environment, select an H200 profile that matches the number of accelerators in your machine. For more information on the supported hardware on RHEL AI, see Red Hat Enterprise Linux AI hardware requirements.
1.2.1.2. NVIDIA Grace Hopper GH200 accelerators (Technology Preview)
You can now use NVIDIA Grace Hopper GH200 accelerators on RHEL AI version 1.5 for inference serving as a Technology Preview. RHEL AI does not include a system profile for the Grace Hopper accelerators by default. To use the GH200 accelerators, initialize your RHEL AI environment with the `h200_x1` profile and add the `max_startup_attempts: 1200` parameter to the `config.yaml` file:

$ ilab config edit
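As a sketch, the parameter can be placed under the vLLM serve settings; the nesting below assumes the default InstructLab configuration layout, so verify it against your generated `config.yaml`:

```yaml
# Illustrative excerpt of config.yaml -- the exact nesting is assumed,
# verify against the file created by ilab config init.
serve:
  vllm:
    # Allow vLLM more startup attempts so the server has time to come up
    # on Grace Hopper systems.
    max_startup_attempts: 1200
```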
1.2.1.3. AMD MI300X accelerators
AMD MI300X accelerators are now generally available for inference serving and running the full end-to-end workflow. For more information on the supported hardware on RHEL AI, see Red Hat Enterprise Linux AI hardware requirements.
1.2.2. Installing
Red Hat Enterprise Linux AI is installable as a bootable image. This image contains various tooling for interacting with RHEL AI. The image includes Red Hat Enterprise Linux 9.4, Python 3.11, and the InstructLab tools for model fine-tuning. For more information about installing Red Hat Enterprise Linux AI, see Installation overview and the "Installation feature tracker".
1.2.3. Building your RHEL AI environment
After installing Red Hat Enterprise Linux AI, you can set up your RHEL AI environment with the InstructLab tools.
1.2.3.1. Initializing InstructLab
You can initialize and set up your RHEL AI environment by running the `ilab config init` command. This command creates the necessary configurations for interacting with RHEL AI and fine-tuning models. It also creates proper directories for your data files. For more information about initializing InstructLab, see the Initialize InstructLab documentation.
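The initialization described above is a single command; the behavior noted in the comment is illustrative and may differ by release:

```shell
# Create the default configuration and data directories; ilab detects
# your hardware and prompts you to confirm a matching system profile.
ilab config init
```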
1.2.3.2. Downloading Large Language Models
You can download various Large Language Models (LLMs) provided by Red Hat to your RHEL AI machine or instance. You can download these models from a Red Hat registry after creating and logging in to your Red Hat registry account. For more information about the supported RHEL AI LLMs, see the Downloading models documentation and the "Large Language Models (LLMs) technology preview status".
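As an illustrative sketch (the repository path and model name below are examples, not an exhaustive list; see the Downloading models documentation for the exact path of the model you need):

```shell
# Authenticate against the Red Hat registry first.
podman login registry.redhat.io

# Download a model; the --repository and --release values are illustrative.
ilab model download --repository docker://registry.redhat.io/rhelai1/granite-3.1-8b-lab-v2 --release latest
```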
1.2.3.2.1. Version 2 of the 3.1 Granite models
RHEL AI version 1.5 now supports the `granite-3.1-8b-starter-v2` student model and the `granite-3.1-8b-lab-v2` inference model. For more information about models, see the Downloading Large Language Models documentation.
1.2.3.3. Serving and chatting with models
Red Hat Enterprise Linux AI version 1.5 allows you to run a vLLM inference server on various LLMs. The vLLM tool is a memory-efficient inference and serving engine library for LLMs that is included in the RHEL AI image. For more information about serving and chatting with models, see Serving and chatting with the models documentation.
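A minimal serving and chatting sketch, assuming the model was already downloaded to the default models directory (the path is illustrative):

```shell
# Start the vLLM inference server on a downloaded model.
ilab model serve --model-path ~/.cache/instructlab/models/granite-3.1-8b-lab-v2

# In a second terminal, open an interactive chat session with the served model.
ilab model chat
```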
1.2.4. Creating skills and knowledge YAML files
On Red Hat Enterprise Linux AI, you can customize your taxonomy tree using custom YAML files so a model can learn domain-specific information. You host your knowledge data in a Git repository and fine-tune a model with that data. For detailed documentation on how to create a knowledge markdown and YAML file, see Customizing your taxonomy tree.
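For illustration only, a knowledge `qna.yaml` skeleton might look like the following; the field names follow the upstream InstructLab taxonomy schema, and the domain, repository, and document values are all placeholders, so check Customizing your taxonomy tree for the authoritative format:

```yaml
# Hypothetical knowledge qna.yaml -- all values are placeholders.
version: 3
domain: example_domain
created_by: your-git-username
seed_examples:
  - context: |
      A short excerpt from your source document goes here.
    questions_and_answers:
      - question: A question grounded in the context above?
        answer: The answer, also grounded in the context.
document_outline: One-line summary of the source document
document:
  repo: https://github.com/example/knowledge-docs
  commit: <commit-sha>
  patterns:
    - example.md
```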
1.2.5. Generating a custom LLM using RHEL AI
You can use Red Hat Enterprise Linux AI to customize a Granite starter LLM with your domain-specific skills and knowledge. RHEL AI includes the LAB enhanced method of Synthetic Data Generation (SDG) and multi-phase training.
1.2.5.1. Synthetic Data Generation (SDG)
Red Hat Enterprise Linux AI includes the LAB enhanced method of synthetic data generation (SDG). You can use `qna.yaml` files with your own knowledge data to create hundreds of artificial datasets in the SDG process. For more information about running the SDG process, see Generating a new dataset with Synthetic data generation (SDG).
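The SDG run itself is one command; the sketch below assumes your taxonomy changes are already committed in the local taxonomy repository:

```shell
# Generate synthetic training data from the qna.yaml files in your
# taxonomy; output datasets land in the ilab datasets directory.
ilab data generate
```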
1.2.5.1.1. Running SDG with the llama-3.3-70B-Instruct model as a teacher model (Technology Preview)
RHEL AI version 1.5 now supports using `llama-3.3-70B-Instruct` as a teacher model when running Synthetic Data Generation (SDG) as a Technology Preview. For more information, see the Using the llama-3.3-70B-Instruct model as a teacher model (Technology Preview) documentation.
1.2.5.2. Training a model with your data
Red Hat Enterprise Linux AI includes the LAB enhanced method of multi-phase training: a fine-tuning strategy where datasets are trained and evaluated in multiple phases to create the best possible model. For more details on multi-phase training, see Training your data on the model.
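A multi-phase training invocation might look like the following sketch; the dataset file names and paths are placeholders for the knowledge and skills files produced by your SDG run:

```shell
# LAB multi-phase training -- phase 1 trains on knowledge data,
# phase 2 on skills data. Paths are illustrative.
ilab model train --strategy lab-multiphase \
  --phased-phase1-data ~/.local/share/instructlab/datasets/knowledge_train_msgs.jsonl \
  --phased-phase2-data ~/.local/share/instructlab/datasets/skills_train_msgs.jsonl
```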
1.2.5.3. Benchmark evaluation
Red Hat Enterprise Linux AI includes the ability to run benchmark evaluations on the newly trained models. You can evaluate how well your trained model knows the knowledge or skills you added with the `MMLU_BRANCH` or `MT_BENCH_BRANCH` benchmark. For more details on benchmark evaluation, see Evaluating your new model.
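As an illustrative sketch (the checkpoint and model paths are placeholders, and additional arguments may be required; see Evaluating your new model for the exact invocation):

```shell
# Score the knowledge you added with the MMLU_BRANCH benchmark,
# comparing a trained checkpoint against the base student model.
ilab model evaluate --benchmark mmlu_branch \
  --model ~/.local/share/instructlab/checkpoints/hf_format/<checkpoint> \
  --base-model ~/.cache/instructlab/models/granite-3.1-8b-starter-v2
```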
1.2.6. Red Hat cross-product features
1.2.6.1. Automating RHEL AI with Ansible Automation Platform
You can now run RHEL AI workloads in playbooks using the Ansible Automation Platform hub. This includes two Ansible collections:
- infra.ai
- A content collection that can provision RHEL AI environments on various cloud providers' infrastructure, including AWS, GCP, and Azure. This collection simplifies the deployment of your AI workloads across cloud providers.
- redhat.ai
- A content collection designed to manage workloads in RHEL AI. You can use the Ansible playbook options to quickly create deployments in RHEL AI, which allows for more efficient model training and serving.
If you are an existing Ansible Automation Platform customer, these collections are included in your current subscription.
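For example, the collections can be installed from automation hub with standard ansible-galaxy commands; this assumes your `ansible.cfg` already points at the Ansible Automation Platform hub with a valid token:

```shell
# Install the RHEL AI collections from Ansible Automation Platform hub.
ansible-galaxy collection install infra.ai
ansible-galaxy collection install redhat.ai
```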
1.3. Red Hat Enterprise Linux AI feature tracker
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
In the following tables, features are marked with the following statuses:
- Not Available
- Technology Preview
- General Availability
- Deprecated
- Removed
1.3.1. Installation feature tracker
Feature | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
---|---|---|---|---|---|
Installing on bare metal | Generally available | Generally available | Generally available | Generally available | Generally available |
Installing on AWS | Generally available | Generally available | Generally available | Generally available | Generally available |
Installing on IBM Cloud | Generally available | Generally available | Generally available | Generally available | Generally available |
Installing on GCP | Not available | Technology preview | Generally available | Generally available | Generally available |
Installing on Azure | Not available | Generally available | Generally available | Generally available | Generally available |
1.3.2. Platform support feature tracker
Feature | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
---|---|---|---|---|---|
Bare metal | Generally available | Generally available | Generally available | Generally available | Generally available |
AWS | Generally available | Generally available | Generally available | Generally available | Generally available |
IBM Cloud | Not available | Generally available | Generally available | Generally available | Generally available |
Google Cloud Platform | Not available | Technology preview | Generally available | Generally available | Generally available |
Azure | Not available | Generally available | Generally available | Generally available | Generally available |
Feature | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
---|---|---|---|---|---|
Bare metal | Generally available | Generally available | Generally available | Generally available | Generally available |
AWS | Generally available | Generally available | Generally available | Generally available | Generally available |
IBM Cloud | Generally available | Generally available | Generally available | Generally available | Generally available |
Google Cloud Platform (GCP) | Not available | Technology preview | Generally available | Generally available | Generally available |
Azure | Not available | Generally available | Generally available | Generally available | Generally available |
Feature | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
---|---|---|---|---|---|
AWS | Not available | Not available | Generally available | Generally available | Generally available |
Azure | Not available | Not available | Generally available | Generally available | Generally available |
1.4. Large Language Models tracker
1.4.1. RHEL AI version 1.5 hardware vendor LLM support
Feature | NVIDIA | AMD | Intel |
---|---|---|---|
| Generally Available | Generally Available | Not Available |
| Generally Available | Generally Available | Not Available |
| Not Available | Not Available | Technology Preview |
| Not Available | Not Available | Technology Preview |
| Technology preview | Technology Preview | Technology Preview |
| Technology Preview | Technology Preview | Technology Preview |
| Generally Available | Generally Available | Technology Preview |
| Technology Preview | Technology Preview | Not Available |
| Generally Available | Generally Available | Not Available |
1.5. Known Issues
Running MMLU evaluation
On RHEL AI version 1.5, you need to use the `--skip-server` flag when running MMLU.
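A hedged example of passing the flag; the model path is a placeholder, and the flag is assumed to go on the evaluate command line:

```shell
# Work around the known issue by skipping the evaluation server.
ilab model evaluate --benchmark mmlu \
  --model ~/.local/share/instructlab/checkpoints/hf_format/<checkpoint> \
  --skip-server
```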
Incorrect auto-detection on some NVIDIA A100 systems
RHEL AI sometimes auto-detects the incorrect system profile on machines with A100 accelerators.
You can select the correct profile by re-initializing and passing the correct system profile.
$ ilab config init --profile <path-to-system-profile>
Fabric manager does not always start with NVIDIA accelerators
After installing Red Hat Enterprise Linux AI on NVIDIA systems, you might see a fabric manager error when serving or training a model. To resolve this issue, run the following commands:
$ sudo systemctl stop nvidia-persistenced.service
$ sudo systemctl start nvidia-fabricmanager.service
$ sudo systemctl start nvidia-persistenced.service
Graphical installation with AMD Technology Preview ISOs
Red Hat Enterprise Linux AI version 1.5 currently does not support graphical installation with the Technology Preview AMD ISOs. Ensure that the `text` parameter in your `kickstart` file is configured for non-interactive installs. You can also pass `inst.text` in your shell during interactive installation to avoid an install-time crash.
SDG can fail on 4xL40s
For SDG to run on 4xL40s, run SDG with the `--num-cpus` flag set to the value of `4`:

$ ilab data generate --num-cpus 4
MMLU and MMLU_BRANCH on the granite-8b-starter-v1 model
When evaluating a model built from the `granite-8b-starter-v1` LLM, there might be an error where vLLM does not start when running the MMLU and MMLU_BRANCH benchmarks. If vLLM does not start, add the following parameter to the `serve` section of your `config.yaml` file:
serve:
vllm:
vllm_args: [--dtype bfloat16]
Kdump over NFS
Red Hat Enterprise Linux AI version 1.5 does not support kdump over NFS without additional configuration. To use this feature, run the following commands:
mkdir -p /var/lib/kdump/dracut.conf.d
echo "dracutmodules=''" > /var/lib/kdump/dracut.conf.d/99-kdump.conf
echo "omit_dracutmodules=''" >> /var/lib/kdump/dracut.conf.d/99-kdump.conf
echo "dracut_args --confdir /var/lib/kdump/dracut.conf.d --install /usr/lib/passwd --install /usr/lib/group" >> /etc/kdump.conf
systemctl restart kdump
1.6. Asynchronous z-stream updates
Security, bug fix, and enhancement updates for RHEL AI 1.5 are released as asynchronous z-stream updates.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous z-stream releases of RHEL AI 1.5. Versioned asynchronous releases, for example with the form RHEL AI 1.5.z, will be detailed in subsections.
1.6.1. Red Hat Enterprise Linux AI 1.5.1 features and bug fixes
Issued: 11 June 2025
Red Hat Enterprise Linux AI release 1.5.1 is now available. This release includes bug fixes and product enhancements.
1.6.1.1. Features
RHEL AI 1.5.1 and later 1.5.z releases support Intel Gaudi 3 accelerators for inference serving models. You can download the Red Hat Enterprise Linux AI image on the Download Red Hat Enterprise Linux AI page and deploy RHEL AI on a machine with Gaudi 3 accelerators.
1.6.1.2. Upgrade
To update your RHEL AI system to the most recent z-stream version, you must be logged in to the Red Hat registry and run the following command:
$ sudo bootc upgrade --apply
1.6.2. Red Hat Enterprise Linux AI 1.5.2 features and bug fixes
Issued: 24 June 2025
Red Hat Enterprise Linux AI release 1.5.2 is now available. This release includes bug fixes and product enhancements.
1.6.2.1. Upgrade
To update your RHEL AI system to the most recent z-stream version, you must be logged in to the Red Hat registry and run the following command:
$ sudo bootc upgrade --apply
1.6.3. Red Hat Enterprise Linux AI 1.5.3 features and bug fixes
Issued: 02 September 2025
Red Hat Enterprise Linux AI release 1.5.3 is now available. This release includes bug fixes and product enhancements.
1.6.3.1. Bug Fix
Previously, InstructLab incorrectly detected the chat template when serving the `granite-3.1-8b-lab-v2.2` model on 8xL4 NVIDIA accelerators, which caused serving to fail. RHEL AI version 1.5.3 now uses the correct chat template, resulting in successful vLLM deployments on 8xL4 NVIDIA accelerators.
1.6.3.2. Upgrade
To update your RHEL AI system to the most recent z-stream version, you must be logged in to the Red Hat registry and run the following command:
$ sudo bootc upgrade --apply