Chapter 5. Separation of control plane and data plane


The Distributed Inference with llm-d architecture separates the model serving control plane from the inference data plane.

KServe manages model lifecycle, scaling, and API exposure. The llm-d inference scheduler handles runtime-aware scheduling, cache locality optimization, and intelligent request distribution across pods and nodes. This separation enables platform teams to swap runtimes or schedulers independently and integrate future innovations without redesigning the stack.

You deploy the inference stack by using Helm charts distributed as OCI container images, making it compatible with OpenShift Container Platform 4.19 or later and any Cloud Native Computing Foundation (CNCF) certified managed Kubernetes 1.33 or later cluster. On OpenShift Container Platform, the chart integrates with Operator Lifecycle Manager (OLM) to install required Operators automatically. On managed Kubernetes, the chart installs all dependencies directly.
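As a minimal sketch of an OCI-based Helm installation, the commands below show the general pattern for pulling and installing a chart from a container registry. The registry path, chart name, and release name here are placeholders, not the actual distribution coordinates for the llm-d stack; consult the product release notes for the correct values.

```shell
# Inspect the chart before installing (path is a placeholder, not the real registry location).
helm show values oci://registry.example.com/charts/inference-stack --version 1.0.0

# Install into a dedicated namespace; Helm resolves OCI references natively
# in Helm 3.8+ without a separate repository add step.
helm install my-inference-stack \
  oci://registry.example.com/charts/inference-stack \
  --version 1.0.0 \
  --namespace inference \
  --create-namespace

# Verify the release.
helm status my-inference-stack --namespace inference
```

Because the chart is an OCI artifact, the same `helm install` invocation works on both OpenShift Container Platform and managed Kubernetes; the chart's templating decides at install time whether to delegate Operator installation to OLM or to create the dependencies directly.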
