Chapter 5. Separation of control plane and data plane


The Distributed Inference with llm-d architecture separates the model serving control plane from the inference data plane.

KServe manages model lifecycle, scaling, and API exposure. The llm-d inference scheduler handles runtime-aware scheduling, cache locality optimization, and intelligent request distribution across pods and nodes. This separation enables platform teams to swap runtimes or schedulers independently and integrate future innovations without redesigning the stack.

You deploy the inference stack by using Helm charts distributed as OCI container images, making it compatible with OpenShift Container Platform 4.19 or later and any Cloud Native Computing Foundation (CNCF) certified managed Kubernetes 1.33 or later cluster. On OpenShift Container Platform, the chart integrates with Operator Lifecycle Manager (OLM) to install required Operators automatically. On managed Kubernetes, the chart installs all dependencies directly.
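Because the charts are distributed as OCI artifacts, installation follows Helm's standard `oci://` reference form. The sketch below illustrates the pattern only; the registry path, chart name, version, and release name are placeholders, not the actual distribution location:

```shell
# Install the inference stack chart directly from an OCI registry.
# The registry path, chart name, and version below are illustrative
# placeholders; substitute the values from your product documentation.
helm install my-inference-stack \
  oci://registry.example.com/charts/llm-d-inference \
  --version 1.0.0 \
  --namespace inference \
  --create-namespace

# Verify the release after installation:
helm status my-inference-stack --namespace inference
```

On OpenShift Container Platform, the same command applies; the chart's OLM integration then handles Operator installation, whereas on a managed Kubernetes cluster the chart pulls in its dependencies itself.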
