Chapter 5. Separation of control plane and data plane
The Distributed Inference with llm-d architecture separates the model serving control plane from the inference data plane.
KServe manages model lifecycle, scaling, and API exposure. The llm-d inference scheduler handles runtime-aware scheduling, cache locality optimization, and intelligent request distribution across pods and nodes. This separation enables platform teams to swap runtimes or schedulers independently and integrate future innovations without redesigning the stack.
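In this split, the control-plane object you author is a KServe `InferenceService`; the data-plane scheduling happens behind it. The following is a minimal illustrative sketch only: the model name, storage URI, and runtime shown here are hypothetical placeholders, not values defined by this product.

```yaml
# Hypothetical example: resource names and URIs are placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-llm          # placeholder model service name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM           # runtime format; swap without changing the scheduler
      storageUri: oci://registry.example.com/models/example-llm:latest
```

Because the `InferenceService` describes only lifecycle and exposure, the llm-d scheduler behind it can change its request-distribution strategy without any edit to this manifest.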
You deploy the inference stack by using Helm charts distributed as OCI container images, making it compatible with OpenShift Container Platform 4.19 or later and with any Cloud Native Computing Foundation (CNCF)-certified Kubernetes cluster running version 1.33 or later. On OpenShift Container Platform, the chart integrates with Operator Lifecycle Manager (OLM) to install the required Operators automatically. On managed Kubernetes, the chart installs all dependencies directly.
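Because the charts are published as OCI artifacts, installation uses Helm's `oci://` chart references. The registry path, release name, and version below are hypothetical placeholders; substitute the values for your environment.

```shell
# Hypothetical chart location and version; replace with your registry's values.
helm install llm-d oci://registry.example.com/charts/llm-d \
  --version 1.0.0 \
  --namespace llm-d \
  --create-namespace
```

The same command works on both OpenShift Container Platform and managed Kubernetes; the chart itself detects which dependencies (such as Operators via OLM) it must install.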