
Chapter 1. About RHEL AI


Red Hat Enterprise Linux AI is a portable bootc image built on Red Hat Enterprise Linux (RHEL) that you can use for inference serving of large language models (LLMs) in the cloud or on bare metal. RHEL AI leverages the upstream vLLM project, which provides state-of-the-art inference and model compression features. RHEL AI is validated and certified as part of the Red Hat AI portfolio.
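
As an illustration of what vLLM-based inference looks like, the following is a minimal offline-inference sketch using the upstream vLLM Python API. The model name is a placeholder, and the packaging and serving workflow inside a RHEL AI deployment may differ; on a running system you would typically use one of the pre-optimized validated models instead.

    # Minimal sketch of offline LLM inference with the upstream vLLM Python API.
    # The model name is a placeholder; substitute a model available in your environment.
    from vllm import LLM, SamplingParams

    prompts = ["What is Red Hat Enterprise Linux AI?"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # Load the model onto the available accelerator and run generation.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.outputs[0].text)

Upstream vLLM also provides a vllm serve command that exposes the same engine over an OpenAI-compatible HTTP API, which is the more common pattern for production inference serving.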

Red Hat Enterprise Linux AI integrates the following Red Hat AI features:

Red Hat AI Inference Server
Run your choice of models across accelerators and Linux environments.
Red Hat AI Model Optimization Toolkit
Compress models to make more efficient use of AI accelerators and compute, reducing compute costs while maintaining high model accuracy.
Pre-optimized validated models
With Red Hat Enterprise Linux AI, you have access to a collection of pre-optimized, near-upstream models that are ready for inference deployment with vLLM on validated hardware.