
Chapter 1. About Speculators


Speculators is a unified library for building, training, and storing speculative decoding algorithms for large language model (LLM) inference with frameworks such as vLLM.

Important

Speculators is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Speculative decoding is an optimization technique that improves inference performance for the LLM you are serving. Red Hat AI Inference Server supports Eagle 3, a speculative decoding algorithm that pairs a small, single-layer draft model with a full-sized "verifier" model, which is the LLM you are serving. The Eagle 3 speculator model auto-regressively predicts several tokens, and the verifier model then processes these tokens in parallel. Because the verifier model can accept multiple tokens per forward pass, effective throughput increases. When the verifier model rejects a token, it samples a corrected token from its own distribution, ensuring the output matches what the verifier would produce on its own.
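The draft-then-verify step described above can be sketched in a simplified greedy form. This is an illustrative sketch, not the Eagle 3 implementation: production speculative decoding uses probabilistic rejection sampling over the two models' distributions, whereas the sketch below accepts a drafted token only when it matches the verifier's top-1 prediction. The function name and token representation are assumptions for illustration.

```python
def verify_drafts(drafted, verifier_top1):
    """Greedy speculative verification (simplified sketch).

    drafted: k tokens proposed auto-regressively by the draft model.
    verifier_top1: the verifier's own top-1 token at each of those k
        positions plus one bonus position (length k + 1), all obtained
        from a single parallel forward pass of the verifier.
    Returns the tokens actually emitted this decoding step.
    """
    accepted = []
    for draft_tok, verifier_tok in zip(drafted, verifier_top1):
        if draft_tok == verifier_tok:
            # Draft token matches the verifier: accept it for free.
            accepted.append(draft_tok)
        else:
            # Mismatch: emit the verifier's corrected token and stop,
            # so output is identical to running the verifier alone.
            accepted.append(verifier_tok)
            return accepted
    # All k drafts accepted: the verifier's extra prediction at the
    # bonus position is also emitted, yielding k + 1 tokens this step.
    accepted.append(verifier_top1[len(drafted)])
    return accepted
```

In the best case every drafted token is accepted and a single verifier forward pass yields k + 1 output tokens; in the worst case the first draft is rejected and the step still produces one correct token, so quality never degrades.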

Speculative decoding provides the following advantages:

  • Latency decreases through parallel token validation.
  • Eagle 3 speculator models require minimal processing due to their small size.
  • Output quality matches what the verifier model would produce alone.
© 2026 Red Hat