Chapter 1. Version 3.2 release notes
Red Hat Enterprise Linux AI is a generative AI inference platform for Linux environments. It uses Red Hat AI Inference Server to run and optimize models, and includes the Red Hat AI Model Optimization Toolkit for model quantization, sparsity, and general compression on supported AI accelerators. The toolkit has native Hugging Face and vLLM support, so you can seamlessly integrate optimized models into deployment pipelines for faster, cost-saving inference at scale, powered by the compressed-tensors model format.
Red Hat Enterprise Linux AI is packaged as a bootc container image for easy deployment on a Linux server appliance with NVIDIA CUDA or AMD ROCm AI accelerators installed. The container images are available from registry.redhat.io:
- registry.redhat.io/rhelai3/bootc-cuda-rhel9:3.2.0
- registry.redhat.io/rhelai3/bootc-rocm-rhel9:3.2.0
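As a minimal sketch, the CUDA variant of the image can be fetched with standard container tooling. This assumes a host with Podman installed, network access, and valid Red Hat registry credentials; the commands are illustrative, not a full installation procedure.

```shell
# Authenticate to the Red Hat container registry
# (prompts for your Red Hat account credentials)
podman login registry.redhat.io

# Pull the NVIDIA CUDA variant of the bootc image
# (tag taken from these release notes)
podman pull registry.redhat.io/rhelai3/bootc-cuda-rhel9:3.2.0
```

For AMD ROCm accelerators, substitute the `bootc-rocm-rhel9:3.2.0` image reference instead.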
There is no direct upgrade path from Red Hat Enterprise Linux AI 1.5 to Red Hat Enterprise Linux AI 3.0. You can upgrade from Red Hat Enterprise Linux AI 3.0 to 3.2.
The registry.redhat.io/rhelai3/bootc-rocm-rhel9:3.2.0 image does not include the Red Hat AI Model Optimization Toolkit, because the toolkit is not supported on AMD ROCm AI accelerators.
1.1. New features
- Red Hat Enterprise Linux AI 3.2 now includes support for AMD ROCm AI accelerators.
1.2. Known issues
There are no known issues for Red Hat Enterprise Linux AI 3.2.