Chapter 2. Version 3.3.1 release notes
Red Hat AI Inference Server 3.3.1 is a maintenance release containing security fixes, bug fixes, and minor enhancements.
The following container images are available from registry.redhat.io:
- registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.3.1
- registry.redhat.io/rhaiis/model-opt-cuda-rhel9:3.3.1
- registry.redhat.io/rhaiis/vllm-rocm-rhel9:3.3.1
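The images above can be pulled with podman after authenticating to registry.redhat.io; a minimal sketch using the CUDA image (the ROCm and model-opt images are pulled the same way):

```shell
# Image reference from the list above.
IMAGE=registry.redhat.io/rhaiis/vllm-cuda-rhel9:3.3.1

# Authentication with a Red Hat account is required before pulling:
#   podman login registry.redhat.io
#   podman pull "$IMAGE"

# The tag after the final colon is the release version.
echo "${IMAGE##*:}"
```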
2.1. Security fixes
Red Hat AI Inference Server 3.3.1 addresses the following CVEs:
2.2. Bug fixes
- Streaming tool calls with Mistral models returned invalid JSON
- Streaming tool calls using Mistral models through the Anthropic-compatible /v1/messages endpoint returned invalid JSON, preventing clients from parsing responses. The JSON serialization for streaming tool calls is corrected. Mistral models using OpenAI-compatible endpoints and non-Mistral models using the /v1/messages endpoint were not affected.
- Quantized Llama-4 models produced incorrect results
- Quantization scales were not permuted correctly for attention layers in Llama-4 models, causing accuracy collapse in models such as Llama-Guard-4-12B when attention layers were quantized. Quantization scales are permuted correctly for Llama-4 attention layers.
- Encoder models failed on AMD ROCm AI accelerators
- The Triton attention backend did not support encoder self-attention, causing encoder-only models, encoder-decoder models, and Whisper speech-to-text models to fail on AMD ROCm AI accelerators. The Triton attention backend supports encoder self-attention.
- GPT-OSS models returned empty content in multi-turn conversations
- The reasoning parser incorrectly matched markers from previous messages in multi-turn conversations, causing GPT-OSS models to return content: null when using the json_object response format. The reasoning parser correctly handles multi-turn conversations.
- Malformed json_schema requests returned HTTP 500 instead of HTTP 400
- When the response_format type was json_schema but the json_schema field was missing, an assertion error caused the server to return HTTP 500 Internal Server Error instead of HTTP 400 Bad Request. The server validates the json_schema field before processing and returns the correct HTTP 400 Bad Request error.
- Large Mixture of Experts models crashed due to integer overflow
- An int32 overflow in fused MoE stride computation caused large Mixture of Experts models to crash or produce silent data corruption. The stride computation uses overflow-safe arithmetic.
- Cascade attention caused numerical instability
- Cascade attention was enabled by default and caused numerical instability in some workloads, resulting in unreliable model outputs. Cascade attention is disabled by default to match the upstream vLLM configuration.
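The json_schema validation fix described above can be illustrated with a minimal sketch of the server-side check. This is a hypothetical helper written for illustration, not the actual vLLM implementation:

```python
from http import HTTPStatus


def validate_response_format(response_format: dict) -> HTTPStatus:
    """Mimic the corrected validation: a request whose response_format
    type is json_schema must also carry a json_schema field. Previously
    this case tripped an assertion (HTTP 500); it is now rejected
    cleanly with HTTP 400 Bad Request."""
    if response_format.get("type") == "json_schema" and "json_schema" not in response_format:
        return HTTPStatus.BAD_REQUEST
    return HTTPStatus.OK


# Malformed request: type is json_schema but the json_schema field is missing.
assert validate_response_format({"type": "json_schema"}) == HTTPStatus.BAD_REQUEST

# Well-formed request: the json_schema field is present.
assert validate_response_format(
    {"type": "json_schema", "json_schema": {"name": "answer", "schema": {"type": "object"}}}
) == HTTPStatus.OK
```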
2.3. Enhancements
- BART encoder-decoder model support for CUDA AI accelerators
- The BART plugin enables inference serving for BART-based summarization and translation models on CUDA AI accelerators.
- Llama-Nemotron embedding model support
- The llama-nemotron-embed-1b-v2 embedding model is supported for inference serving.
- Custom encoder support for classification models
- Classification models can use custom encoders for improved inference performance.
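As a usage sketch for the embedding model support above: the model name comes from the release notes, while the server URL and request shape assume a server exposing the standard OpenAI-compatible /v1/embeddings endpoint:

```python
import json

# Request body for the OpenAI-compatible /v1/embeddings endpoint,
# assuming the server was started with the llama-nemotron-embed-1b-v2 model.
payload = {
    "model": "llama-nemotron-embed-1b-v2",
    "input": ["Red Hat AI Inference Server release notes"],
}
body = json.dumps(payload)

# Sent, for example, with (host and port are illustrative):
#   curl -X POST http://localhost:8000/v1/embeddings \
#        -H "Content-Type: application/json" -d "$body"
print(json.loads(body)["model"])
```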