Chapter 2. New features and enhancements


  • New versions of vLLM and LLM Compressor are included in this release:

    • vLLM v0.9.0.1

      • 900+ upstream commits since vLLM v0.8.4. New features include FP8 fused Mixture of Experts (MoE) kernels, support for 14 new models, a /server_info endpoint, and dynamic LoRA hot reload.
    • LLM Compressor v0.5.1
  • The Red Hat AI Inference Server container base is now built on PyTorch 2.7 and Triton 3.2.
  • Red Hat AI Inference Server is now fully supported on FIPS-compliant Red Hat Enterprise Linux (RHEL) hosts.
  • The Red Hat AI Inference Server supported product and hardware configurations have been expanded. For more information, see Supported product and hardware configurations.
Table 2.1. AI accelerator performance highlights
Feature | Benefit | Supported GPUs
Blackwell support | Runs on NVIDIA B200 compute capability 10.0 GPUs with FP8 kernels and full CUDA Graph acceleration | NVIDIA Blackwell
FP8 KV-cache on ROCm | Context windows roughly twice as large with no accuracy loss | All AMD GPUs
Skinny GEMMs | Roughly 10% lower inference latency | AMD MI300X
Full CUDA Graph mode | 6–8% improvement in average Time Per Output Token (TPOT) for small models | NVIDIA A100 and H100
Auto FP16 fallback | Stable runs on pre-Ampere cards, for example NVIDIA T4 GPUs, without manual flags | Older NVIDIA GPUs
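
The FP8 KV-cache entry in Table 2.1 is controlled through the engine's KV cache data type. The following is a minimal sketch using the vLLM offline Python API; the model name is only a placeholder, and kv_cache_dtype="fp8" is the setting that stores the KV cache in 8-bit floating point so longer contexts fit in the same GPU memory.

from vllm import LLM, SamplingParams

# Minimal sketch: keep the KV cache in FP8 so longer contexts fit in GPU memory.
# The model below is only a placeholder; substitute your deployed model.
llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    kv_cache_dtype="fp8",       # FP8 KV cache, as in the FP8 KV-cache row of Table 2.1
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)

When starting the OpenAI-compatible server instead, the same engine argument is exposed on the command line as --kv-cache-dtype.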

2.1. New models enabled

Red Hat AI Inference Server 3.1 expands capabilities by enabling the following models:

  • Added in vLLM version 0.8.5:

    • Qwen3 and Qwen3MoE
    • ModernBERT
    • Granite Speech
    • PLaMo2
    • Kimi-VL
    • Snowflake Arctic Embed
  • Added in vLLM version 0.9.0:

    • MiMo-7B
    • MiniMax-VL-01
    • Ovis 1.6, Ovis 2
    • Granite 4
    • FalconH1
    • LlamaGuard4

2.2. New developer features

/server_info REST endpoint
Query model, KV cache, and device settings for observability and automation (sketched after this list).
Dynamic LoRA hot reload
Swap fine-tuned adapters from a URL with zero downtime (sketched after this list).
vllm-bench CLI
Benchmarking tool shipped in the container for quick latency and throughput sizing.
Faster incremental detokenization
Streaming responses start twice as fast on CUDA and ROCm GPUs.
torch.compile caching
Cached first prompt compilation shortens warm-up times across host restarts.
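
The /server_info endpoint and dynamic LoRA hot reload can be driven from any HTTP client. The first sketch below queries /server_info from Python; it assumes an inference server is already listening on localhost:8000 (a placeholder address), and the exact fields returned depend on the deployment.

import json
import urllib.request

# Minimal sketch: read the /server_info endpoint for observability.
# Assumes a server is already running on localhost:8000 (placeholder address).
with urllib.request.urlopen("http://localhost:8000/server_info") as resp:
    info = json.load(resp)

# The response reports model, KV cache, and device settings.
print(json.dumps(info, indent=2))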
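
The second sketch hot-loads a LoRA adapter over HTTP. It assumes runtime LoRA updating is enabled on the server (upstream vLLM gates this behind the VLLM_ALLOW_RUNTIME_LORA_UPDATING environment variable) and uses the upstream /v1/load_lora_adapter endpoint; the adapter name and location are placeholders, so confirm the endpoint and payload against your deployed version.

import json
import urllib.request

# Minimal sketch: hot-load a fine-tuned LoRA adapter without restarting the server.
# The adapter name and path below are placeholders.
payload = json.dumps({
    "lora_name": "my-adapter",                   # hypothetical adapter name
    "lora_path": "/models/adapters/my-adapter",  # placeholder path or URL
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8000/v1/load_lora_adapter",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())

Once loaded, completion requests can target the adapter by passing its name as the model in the request body; unloading goes through the matching /v1/unload_lora_adapter endpoint in upstream vLLM.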

2.3. New operational features

Lower total cost of ownership (TCO)
FP8/INT8 kernels and skinny GEMMs allow the same GPUs to serve more tokens per second.
Larger models on AMD GPUs
ROCm now matches CUDA for FP8 and fused MoE model performance, making AMD MI300X a first-class deployment target.
Operational agility
Dynamic LoRA hot swap and the /server_info endpoint enable continuous integration and deployment of fine-tuned models without pod restarts.