Chapter 1. Version 3.2.4 release notes
Red Hat AI Inference Server 3.2.4 provides container images that optimize inferencing with large language models (LLMs) for NVIDIA CUDA, AMD ROCm, Google TPU, and IBM Spyre AI accelerators. The following container images are Generally Available (GA) from registry.redhat.io:
- registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4
- registry.redhat.io/rhaiis/vllm-rocm-rhel9:3.2.4
- registry.redhat.io/rhaiis/vllm-spyre-rhel9:3.2.4
- registry.redhat.io/rhaiis/model-opt-cuda-rhel9:3.2.4
The following container image is a Technology Preview feature:
registry.redhat.io/rhaiis/vllm-tpu-rhel9:3.2.4

Important: The rhaiis/vllm-tpu-rhel9:3.2.4 container is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To facilitate customer testing of new models, early access fast release Red Hat AI Inference Server images are available in near-upstream preview builds. Fast release container images are not functionally complete or production-ready, have minimal productization, and are not supported by Red Hat in any way.
You can find available fast release images in the Red Hat Ecosystem Catalog.
The Red Hat AI Inference Server supported product and hardware configurations have been expanded. For more information, see Supported product and hardware configurations.
1.1. New Red Hat AI Inference Server developer features
Red Hat AI Inference Server 3.2.4 packages the upstream vLLM v0.11.0 release. This is unchanged from the Red Hat AI Inference Server 3.2.3 release. See the Version 3.2.3 release notes for more information.
1.2. New Red Hat AI Model Optimization Toolkit developer features
Red Hat AI Model Optimization Toolkit 3.2.4 packages the upstream LLM Compressor v0.8.1 release. This is unchanged from the Red Hat AI Inference Server 3.2.3 release. See the Version 3.2.3 release notes for more information.
1.3. Known issues
The FlashInfer kernel sampler was disabled by default in Red Hat AI Inference Server 3.2.3 to address non-deterministic behavior and correctness errors in model output.
This change affects sampling behavior when using FlashInfer top-p and top-k sampling methods. If required, you can enable FlashInfer by setting the VLLM_USE_FLASHINFER_SAMPLER environment variable at runtime:

VLLM_USE_FLASHINFER_SAMPLER=1
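For example, when serving with the CUDA container you can pass the environment variable at container start. This invocation is illustrative only: the GPU device flag, port, and model name are placeholder values you must adjust for your deployment.

```shell
# Illustrative only: enable the FlashInfer sampler for a containerized
# vLLM deployment. Adjust the device flag, port, and model for your setup.
podman run --rm -it \
  --device nvidia.com/gpu=all \
  -p 8000:8000 \
  -e VLLM_USE_FLASHINFER_SAMPLER=1 \
  registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4 \
  --model RedHatAI/Llama-3.2-1B-Instruct-FP8
```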
AMD ROCm AI accelerators do not support inference serving of encoder-decoder models when using the vLLM v1 inference engine.
Encoder-decoder model architectures cause NotImplementedError failures with AMD ROCm accelerators, because ROCm attention backends support only decoder-only attention. Affected models include, but are not limited to, the following:
- Speech-to-text Whisper models, for example openai/whisper-large-v3-turbo and mistralai/Voxtral-Mini-3B-2507
- Vision-language models, for example microsoft/Phi-3.5-vision-instruct
- Translation models, for example T5, BART, MarianMT
- Any models using cross-attention or an encoder-decoder architecture
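Before attempting to serve a model on ROCm hardware, you can check whether its Hugging Face config.json declares an encoder-decoder architecture. The sketch below is a crude string match, not a full JSON parse, and the sample file stands in for a real downloaded model configuration; a JSON-aware tool such as jq is more robust in practice.

```shell
# Write a sample config.json standing in for a downloaded model's
# configuration (Whisper-style models declare "is_encoder_decoder": true).
cat > /tmp/sample-config.json <<'EOF'
{"model_type": "whisper", "is_encoder_decoder": true}
EOF

# Crude check: encoder-decoder models fail with NotImplementedError on
# ROCm accelerators with the vLLM v1 engine.
if grep -q '"is_encoder_decoder": true' /tmp/sample-config.json; then
  echo "encoder-decoder architecture: not supported on ROCm with the v1 engine"
else
  echo "decoder-only architecture"
fi
```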