Chapter 2. Version 3.2.4 release notes


Red Hat AI Inference Server 3.2.4 provides container images that optimize inferencing with large language models (LLMs) for NVIDIA CUDA, AMD ROCm, Google TPU, and IBM Spyre AI accelerators. The following container images are Generally Available (GA) from registry.redhat.io:

  • registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4
  • registry.redhat.io/rhaiis/vllm-rocm-rhel9:3.2.4
  • registry.redhat.io/rhaiis/vllm-spyre-rhel9:3.2.4
  • registry.redhat.io/rhaiis/model-opt-cuda-rhel9:3.2.4
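
For example, after you log in to registry.redhat.io with your Red Hat account credentials, you can pull the CUDA image with Podman:

    podman login registry.redhat.io
    podman pull registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4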

The following container image is a Technology Preview feature:

  • registry.redhat.io/rhaiis/vllm-tpu-rhel9:3.2.4

    Important

    The rhaiis/vllm-tpu-rhel9:3.2.4 container is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

To facilitate customer testing of new models, early access fast release images of Red Hat AI Inference Server are available as near-upstream preview builds. Fast release container images are not functionally complete or production-ready, receive minimal productization, and are not supported by Red Hat in any way.

You can find available fast release images in the Red Hat Ecosystem Catalog.

The Red Hat AI Inference Server supported product and hardware configurations have been expanded. For more information, see Supported product and hardware configurations.

2.1. New Red Hat AI Inference Server developer features

Red Hat AI Inference Server 3.2.4 packages the upstream vLLM v0.11.0 release. This is unchanged from the Red Hat AI Inference Server 3.2.3 release. See the Version 3.2.3 release notes for more information.
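
To verify which vLLM release an image packages, you can query the Python interpreter inside the container. The following is a minimal sketch that assumes python3 is available on the image PATH:

    podman run --rm --entrypoint python3 \
      registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4 \
      -c "import vllm; print(vllm.__version__)"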

Red Hat AI Model Optimization Toolkit 3.2.4 packages the upstream LLM Compressor v0.8.1 release. This is unchanged from the Red Hat AI Inference Server 3.2.3 release. See the Version 3.2.3 release notes for more information.

2.2. Known issues

  • The FlashInfer kernel sampler was disabled by default in Red Hat AI Inference Server 3.2.3 to address non-deterministic behavior and correctness errors in model output.

    This change affects sampling behavior when using FlashInfer top-p and top-k sampling methods. If required, you can re-enable FlashInfer by setting the VLLM_USE_FLASHINFER_SAMPLER environment variable at runtime (see the container example after this list):

    VLLM_USE_FLASHINFER_SAMPLER=1

  • AMD ROCm AI accelerators do not support serving encoder-decoder models with the vLLM v1 inference engine.

    Encoder-decoder model architectures fail with NotImplementedError on AMD ROCm accelerators because ROCm attention backends support only decoder-only attention.

    Affected models include, but are not limited to, the following:

    • Speech-to-text Whisper models, for example openai/whisper-large-v3-turbo and mistralai/Voxtral-Mini-3B-2507
    • Vision-language models, for example microsoft/Phi-3.5-vision-instruct
    • Translation models, for example T5, BART, MarianMT
    • Any models using cross-attention or an encoder-decoder architecture
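
The following example shows how to re-enable the FlashInfer sampler at container startup on a CUDA host. This is a minimal sketch, not a recommendation: the model name is a placeholder, and the command assumes the image entrypoint accepts vLLM serve options:

    podman run --rm -it \
      --device nvidia.com/gpu=all \
      --security-opt=label=disable \
      -e VLLM_USE_FLASHINFER_SAMPLER=1 \
      -p 8000:8000 \
      registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.2.4 \
      --model RedHatAI/Llama-3.1-8B-Instruct-FP8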