Chapter 1. About AI Inference Server


AI Inference Server provides enterprise-grade stability and security, building on the open source vLLM project and its state-of-the-art inferencing features.

AI Inference Server uses continuous batching to process requests as they arrive instead of waiting for a full batch to accumulate. It also uses tensor parallelism to distribute LLM workloads across multiple GPUs. Together, these features reduce latency and increase throughput.
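Because AI Inference Server builds on vLLM, both features can be illustrated with the upstream vLLM Python API. The following is a minimal sketch, assuming a local vLLM installation, two available GPUs, and a placeholder model name: the engine schedules the submitted prompts with continuous batching, and `tensor_parallel_size=2` shards the model across two GPUs.

```python
# Illustrative sketch using the upstream vLLM Python API, on which
# AI Inference Server is built. The model name and GPU count are
# assumptions for this example.
from vllm import LLM, SamplingParams

# tensor_parallel_size=2 distributes the model's weights across two GPUs.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=2)

sampling = SamplingParams(temperature=0.7, max_tokens=128)

# The engine schedules these prompts with continuous batching: new requests
# join in-flight batches instead of waiting for a full batch to accumulate.
prompts = [
    "Explain continuous batching in one sentence.",
    "What is tensor parallelism?",
]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```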

To reduce the cost of inferencing models, AI Inference Server uses paged attention. LLMs use a mechanism called attention to understand conversations with users. Normally, attention consumes a significant amount of memory, much of which is wasted. Paged attention addresses this waste by provisioning memory for LLMs in much the same way that virtual memory works for operating systems. This approach consumes less memory and lowers costs.
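The virtual-memory analogy can be made concrete with a small, purely illustrative sketch; this is not vLLM's actual implementation, and all names and sizes below are invented. The KV cache is carved into fixed-size blocks, and each sequence keeps a block table mapping its logical token positions to physical blocks, so memory is allocated on demand rather than reserved up front for a maximum sequence length.

```python
# Purely illustrative sketch of block-based (paged) KV-cache allocation;
# names and sizes are invented for the example, not vLLM internals.
BLOCK_SIZE = 16  # tokens per physical block

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))        # physical block pool
        self.block_tables: dict[int, list[int]] = {}      # seq_id -> physical blocks

    def append_token(self, seq_id: int, num_tokens: int) -> None:
        """Allocate a new block only when the sequence crosses a block boundary."""
        table = self.block_tables.setdefault(seq_id, [])
        if num_tokens > len(table) * BLOCK_SIZE:
            table.append(self.free_blocks.pop())          # allocate on demand

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool for reuse."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))

cache = PagedKVCache(num_blocks=64)
for t in range(1, 40):          # a 39-token sequence touches only 3 blocks,
    cache.append_token(0, t)    # rather than a max-length reservation
print(cache.block_tables[0])
cache.free(0)
```

Because blocks are reclaimed as soon as a sequence finishes, the same physical memory is continuously reused across requests, which is where the memory savings come from.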

To verify cost savings and performance gains with AI Inference Server, complete the following procedures:

  1. Serving and inferencing with AI Inference Server
  2. Validating Red Hat AI Inference Server benefits using key metrics
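For orientation before working through those procedures, the sketch below shows the general shape of inferencing against a vLLM-based server, which exposes an OpenAI-compatible HTTP API. The URL, port, and model name are assumptions for the example; the actual serving steps are covered in the first procedure.

```python
# Minimal sketch of querying an OpenAI-compatible endpoint such as the one
# a vLLM-based server exposes. URL, port, and model name are assumptions.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",  # default vLLM serving address
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "prompt": "What is paged attention?",
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```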