Chapter 1. Understanding DCN
Distributed compute node (DCN) architecture is designed for edge use cases: it allows compute and storage nodes to be deployed at remote locations while sharing a common centralized control plane. DCN architecture allows you to position workloads strategically closer to your operational needs for higher performance.
The central location can consist of any role, but requires a minimum of three controllers. Compute nodes can exist at the edge as well as at the central location.
DCN architecture is a hub-and-spoke routed network deployment. It is comparable to a spine-and-leaf deployment for routed provisioning and control plane networking with Red Hat OpenStack Platform director.
- The hub is the central site with core routers and a datacenter gateway (DC-GW).
- The spoke is the remote edge, or leaf.
Edge locations do not have controllers, making them architecturally different from traditional deployments of Red Hat OpenStack Platform:
- Control plane services run remotely, at the central location.
- Pacemaker is not installed.
- The Block Storage service (cinder) runs in active/active mode.
- Etcd is deployed as a distributed lock manager (DLM); see the configuration sketch after this list.
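As an illustration of the last two points, the Block Storage service coordinates its active/active volume workers through etcd rather than Pacemaker. The following is a minimal sketch, assuming a hypothetical controller host name and cluster name; in practice, director generates this configuration for you:

```bash
# Illustrative sketch only: in a real deployment, director renders this
# configuration. The Block Storage service (cinder) points its tooz
# coordination backend at etcd, which serves as the DLM. The host name
# below is hypothetical.
crudini --set /etc/cinder/cinder.conf coordination backend_url \
    "etcd3+http://central-controller-0.example.com:2379"

# Active/active volume workers join a shared cluster instead of relying
# on Pacemaker for failover; the cluster name is hypothetical.
crudini --set /etc/cinder/cinder.conf DEFAULT cluster dcn-volume
```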
1.1. Required software for distributed compute node architecture
The following table shows the software and minimum versions required to deploy Red Hat OpenStack Platform in a distributed compute node (DCN) architecture:
| Platform | Version | Optional |
|---|---|---|
| Red Hat Enterprise Linux | 9 | No |
| Red Hat OpenStack Platform | 17.0 | No |
| Red Hat Ceph Storage | 5 | Yes |
1.2. Multistack design
When you deploy Red Hat OpenStack Platform (RHOSP) in a DCN design, you use the multiple stack deployment and management capabilities of RHOSP director to deploy each site as a distinct stack.
Managing a DCN architecture as a single stack is unsupported unless the deployment is an upgrade from Red Hat OpenStack Platform 13. There are no supported methods to split an existing stack; however, you can add stacks to a pre-existing deployment. For more information, see Section A.3, “Migrating to a multistack deployment”.
The central location is a traditional stack deployment of RHOSP; however, you are not required to deploy Compute nodes or Red Hat Ceph Storage with the central stack.
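For example, the central location and each edge site might be deployed as separate stacks with the `--stack` option of the deployment command. The following is a minimal sketch; the stack names, environment files, and the many omitted deployment options are assumptions:

```bash
# Deploy the central location as its own stack (file names hypothetical).
openstack overcloud deploy \
    --stack central \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e ~/central/central-overrides.yaml

# Deploy an edge site as a separate stack against the same undercloud.
openstack overcloud deploy \
    --stack dcn0 \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e ~/dcn0/dcn0-overrides.yaml
```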
With DCN, you deploy each location as a distinct availability zone (AZ).
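For example, after deployment you can target a specific site by naming its AZ when you create resources. The AZ and volume names below are assumptions:

```bash
# List the availability zones visible to the Compute service.
openstack availability zone list

# Create a volume at a specific edge site by naming its AZ.
openstack volume create --availability-zone dcn0 --size 10 dcn0-volume-1
```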
1.3. DCN storage
You can deploy each edge site either without storage or with Ceph on hyperconverged nodes. The storage that you deploy is dedicated to the site where you deploy it.
DCN architecture uses glance multistore. For edge sites deployed without storage, additional tooling is available so that you can cache and store images in the Compute service (nova) cache. Caching glance images in nova provides faster boot times for instances by avoiding the download of images across a WAN link. For more information, see Chapter 10, Precaching glance images into nova.
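For example, with glance multistore you can import an image into the central store and selected edge stores in one operation, and copy it to further stores later. The following is a minimal sketch using the glance client's import workflow; the store names, image name, and URI are assumptions:

```bash
# Import an image into the central store and one edge store in a single
# operation (store and image names are hypothetical).
glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name rhel9-guest \
    --uri http://example.com/images/rhel9-guest.qcow2 \
    --import-method web-download \
    --stores central,dcn0

# Later, copy the existing image to an additional edge store so that
# instances there boot from a local copy instead of pulling it over the WAN.
glance image-import <image-id> --stores dcn1 --import-method copy-image
```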
1.4. DCN edge
With distributed compute node (DCN) architecture, you deploy the controller nodes at the central site and use these controllers to manage geographically dispersed edge sites. When you deploy an edge site, you deploy only Compute nodes, which makes edge sites architecturally different from traditional deployments of Red Hat OpenStack Platform. When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. To save time during instance launch, you can copy images from the central Image service store to edge sites in advance by using glance multistore. For more information, see Image service with multiple stores.
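For example, the following minimal sketch launches an instance into a hypothetical edge AZ named `dcn0`; the image, flavor, and network names are also assumptions:

```bash
# Boot an instance at the dcn0 edge site; if the image is not yet in the
# site's local glance store, it is copied there as part of the launch.
openstack server create \
    --availability-zone dcn0 \
    --image rhel9-guest \
    --flavor m1.small \
    --network dcn0-network \
    dcn0-instance-1
```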
At edge sites:
- Control plane services run remotely at the central location.
- Pacemaker does not run at DCN sites.
- The Block Storage service (cinder) runs in active/active mode.
- Etcd is deployed as a distributed lock manager (DLM).