Chapter 1. Version 3.3 release notes


Red Hat Enterprise Linux AI is a generative AI inference platform for Linux environments that uses Red Hat AI Inference Server for running and optimizing models, and includes Red Hat AI Model Optimization Toolkit for model quantization, sparsity, and general compression for supported AI accelerators. Red Hat AI Model Optimization Toolkit has native Hugging Face and vLLM support. You can seamlessly integrate optimized models with deployment pipelines for faster, cost-saving inference at scale, powered by the compressed-tensors model format.

Red Hat Enterprise Linux AI is packaged as a bootc container image for easy deployment on a Linux server appliance with NVIDIA CUDA or AMD ROCm AI accelerators installed. The container images are available from registry.redhat.io:

  • registry.redhat.io/rhelai3/bootc-cuda-rhel9:3.3.0
  • registry.redhat.io/rhelai3/bootc-rocm-rhel9:3.3.0
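
As a sketch of how these images are typically consumed, the following commands log in to the registry, pull the CUDA variant, and switch an existing bootc system to it. The exact workflow for your environment may differ; consult the installation documentation before applying.

```shell
# Log in to registry.redhat.io with your Red Hat customer portal credentials.
podman login registry.redhat.io

# Pull the CUDA variant of the bootc image.
# For AMD accelerators, use the bootc-rocm-rhel9:3.3.0 tag instead.
podman pull registry.redhat.io/rhelai3/bootc-cuda-rhel9:3.3.0

# On a system already booted from a bootc image, point it at the new
# image and reboot to apply the update.
sudo bootc switch registry.redhat.io/rhelai3/bootc-cuda-rhel9:3.3.0
sudo systemctl reboot
```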
Important

There is no direct upgrade path from Red Hat Enterprise Linux AI 1.5 to Red Hat Enterprise Linux AI 3.0. You can upgrade to Red Hat Enterprise Linux AI 3.3 from version 3.0 or from any release in between.

Important

The registry.redhat.io/rhelai3/bootc-rocm-rhel9:3.3.0 image does not include Red Hat AI Model Optimization Toolkit, which is not supported for AMD ROCm AI accelerators.

1.1. New features

Red Hat Enterprise Linux AI 3.3 packages Red Hat AI Inference Server 3.3, which includes the following highlights:

New model support
Red Hat AI Inference Server 3.3 adds support for Mistral 3 models including Mixture of Experts (MoE) architecture variants, IBM Prithvi geospatial foundation models, and various other models including BAGEL, AudioFlamingo3, and JAIS 2.
New AI accelerator support
Red Hat AI Inference Server 3.3 adds support for NVIDIA B300 and GB300 Blackwell AI accelerators with CUDA 13.0, AMD Instinct MI325X AI accelerators, and CPU-only x86_64 AVX2 inference as a Technology Preview. Support for AWS Trainium and Inferentia accelerators is also available as a Technology Preview.
Performance improvements
Whisper models now run approximately 3 times faster than in the previous release. DeepSeek-V3.1 models show a 5.3% throughput improvement and a 4.4% time-to-first-token improvement.
Model optimization updates
Red Hat AI Model Optimization Toolkit adds model-free post-training quantization on safetensors files, extended KV cache and attention quantization capabilities, and the AutoRoundModifier algorithm.
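
To illustrate the idea behind post-training quantization, the following minimal sketch shows symmetric per-tensor int8 quantization: weights are mapped to 8-bit integers with a single scale and reconstructed approximately on dequantization. This is a generic illustration of the technique, not the Red Hat AI Model Optimization Toolkit API.

```python
def quantize_symmetric_int8(values):
    """Map floats to int8 using one per-tensor scale (symmetric quantization)."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Reconstruct approximate float values from the int8 representation."""
    return [q * scale for q in quantized]

# Example: a tiny weight "tensor" (illustrative values only).
weights = [0.02, -1.5, 0.8, 1.5]
q, scale = quantize_symmetric_int8(weights)
deq = dequantize(q, scale)
# Extremes map to -127/127; small values lose some precision to rounding.
```

The same principle extends to the KV cache and attention quantization mentioned above: activations are stored in a low-precision format with associated scales, trading a small accuracy loss for memory and bandwidth savings.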

For the complete list of new features, enhancements, and known issues, see the Red Hat AI Inference Server 3.3 release notes.

1.2. Known issues

There are no known issues for Red Hat Enterprise Linux AI 3.3.
