Chapter 4. Preparing overcloud templates for DCN deployment
4.1. Prerequisites for using separate heat stacks
Your environment must meet the following prerequisites before you create a deployment using separate heat stacks:
- A working Red Hat OpenStack Platform 16 undercloud.
- For Ceph Storage users: access to Red Hat Ceph Storage 4.
- For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks.
- Ceph storage is a requirement at the central location if you plan to deploy Ceph storage at the edge.
- For each additional DCN site: three HCI compute nodes.
- All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs.
- All nodes have been introspected by ironic.
- Red Hat recommends leaving the <role>HostnameFormat parameter as the default value: %stackname%-<role>-%index%. If you do not include the %stackname% prefix, your overcloud uses the same hostnames for distributed compute nodes in different stacks. Ensure that your distributed compute nodes use the %stackname% prefix to distinguish nodes from different edge sites. For example, if you deploy two edge sites named dcn0 and dcn1, the stack name prefix helps you to distinguish between dcn0-distributedcompute-0 and dcn1-distributedcompute-0 when you run the openstack server list command on the undercloud. A sketch of setting this parameter explicitly follows this list.
- Source the centralrc authentication file to schedule workloads at edge sites as well as at the central location. You do not require the authentication files that are automatically generated for edge sites.
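For illustration, a minimal sketch of setting the hostname format for a distributed compute role in an environment file. The DistributedComputeHostnameFormat parameter name is an assumption based on the <role>HostnameFormat convention, and the value shown matches the default:
parameter_defaults:
  # Assumed parameter name for the DistributedCompute role; keep the %stackname% prefix
  DistributedComputeHostnameFormat: '%stackname%-distributedcompute-%index%'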
4.2. Limitations of the example separate heat stacks deployment
This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations:
- Spine/Leaf networking - The example in this guide does not demonstrate the routing configuration that distributed compute node (DCN) deployments require.
- Ironic DHCP Relay - This guide does not include how to configure Ironic with a DHCP relay.
4.3. Designing your separate heat stacks deployment
To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:
- Controller nodes: A separate heat stack named central, for example, deploys the controllers. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
- DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
You must create a separate availability zone (AZ) for each stack.
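For example, a minimal sketch of naming the compute availability zone after the stack in an edge site's environment file. The NovaComputeAvailabilityZone parameter and the dcn0 value are assumptions for illustration:
parameter_defaults:
  # Assumption: name the AZ after the stack so that workloads can be scheduled per site
  NovaComputeAvailabilityZone: dcn0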
If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks so that ceph-ansible correctly configures Ceph to use those networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. In the following example, the storage network (referred to as the public_network) spans two subnets that are separated by a comma, and the value is enclosed in single quotes:
CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
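The StorageMgmt network follows the same pattern; a hedged sketch, assuming it also spans two subnets (the cluster_network key is the ceph-ansible name for the StorageMgmt network, and the subnet values are placeholders):
CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
  # Assumption: example StorageMgmt subnets; replace with your own values
  cluster_network: '172.23.3.0/24,172.23.4.0/24'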
4.4. Reusing network resources in multiple stacks
You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.
Do not use the ManageNetworks setting if you are using the external_resource_* fields.
If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0, as shown in the sketch after this paragraph.
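A minimal sketch of such a renamed entry in the network_data.yaml file of a separate compute stack; the subnet and allocation pool values are placeholders:
- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  vip: false
  ip_subnet: '172.17.0.0/24'
  allocation_pools: [{'start': '172.17.0.10', 'end': '172.17.0.250'}]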
4.5. Using ManageNetworks to reuse network resources
With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:
- name: StorageBackup
  vip: true
  name_lower: storage_backup
  ip_subnet: '172.21.1.0/24'
  allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
  gateway_ip: '172.21.1.1'
When you set ManageNetworks to false, the nodes use the existing networks that were already created in the central stack.
Use the following sequence so that the new stack does not manage the existing network resources.
Procedure
1. Deploy the central stack with ManageNetworks: true, or leave the setting unset.
2. Deploy the additional stack with ManageNetworks: false.
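A minimal sketch of the second step, assuming the setting is passed through parameter_defaults in the additional stack's overrides file (the file name comes from this guide's layout):
# dcn0/overrides.yaml
parameter_defaults:
  # Reuse the network resources that the central stack owns instead of creating new ones
  ManageNetworks: false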
When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml file. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.
4.6. Using UUIDs to reuse network resources
If you need more control over which networks are reused between stacks, you can use the external_resource_* fields for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as externally managed, and heat does not perform any create, update, or delete operations on them.
Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:
external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID
This example reuses the internal_api network from the control plane stack in a separate stack.
Procedure
1. Identify the UUIDs of the related network resources:
$ openstack network show internal_api -c id -f value
$ openstack subnet show internal_api_subnet -c id -f value
$ openstack port show internal_api_virtual_ip -c id -f value
2. Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack:
- name: InternalApi
  external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
  external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
  external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
  name_lower: internal_api
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  mtu: 1400
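As a convenience, a hedged sketch of capturing the same UUIDs into shell variables so that you can paste them into the network_data.yaml file; the variable names are arbitrary:
# Capture the UUIDs shown by the commands above
NET_ID=$(openstack network show internal_api -c id -f value)
SUBNET_ID=$(openstack subnet show internal_api_subnet -c id -f value)
VIP_ID=$(openstack port show internal_api_virtual_ip -c id -f value)
# Print them in the form used by network_data.yaml
echo "external_resource_network_id: ${NET_ID}"
echo "external_resource_subnet_id: ${SUBNET_ID}"
echo "external_resource_vip_id: ${VIP_ID}"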
4.7. Managing separate heat stacks
The procedures in this guide show how to organize the environment files for three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.
Procedure
1. Define the central heat stack:
$ mkdir central
$ touch central/overrides.yaml
2. Extract data from the central heat stack into a common directory for all DCN sites:
$ mkdir dcn-common
$ touch dcn-common/overrides.yaml
$ touch dcn-common/central-export.yaml
The central-export.yaml file is created later by the openstack overcloud export command. It is in the dcn-common directory because all DCN deployments in this guide must use this file.
3. Define the dcn0 site:
$ mkdir dcn0
$ touch dcn0/overrides.yaml
To deploy more DCN sites, create additional dcn directories by number.
The touch command is used to provide an example of file organization. Each file must contain the appropriate content for successful deployments.
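A hedged sketch of the export step mentioned above, which you run after the central stack deploys; the --stack and --output-file options are assumptions based on this guide's file layout:
$ openstack overcloud export \
  --stack central \
  --output-file dcn-common/central-export.yaml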
4.8. Retrieving the container images
Use the following procedure, and its example file contents, to retrieve the container images you need for deployments with separate heat stacks. Ensure that the container images for optional or edge-specific services are included by running the openstack tripleo container image prepare command with the edge site's environment files.
For more information, see Preparing container images.
Procedure
1. Add your Registry Service Account credentials to containers.yaml:
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      ceph_namespace: registry.redhat.io/rhceph
      ceph_image: rhceph-4-rhel8
      ceph_tag: latest
      name_prefix: openstack-
      namespace: registry.redhat.io/rhosp16-rhel8
      tag: latest
  ContainerImageRegistryCredentials:
    # https://access.redhat.com/RegistryAuthentication
    registry.redhat.io:
      registry-service-account-username: registry-service-account-password
2. Generate the environment file as images-env.yaml:
sudo openstack tripleo container image prepare \
-e containers.yaml \
--output-env-file images-env.yaml
The resulting images-env.yaml file is included as part of the overcloud deployment procedure for the stack for which it is generated.
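For example, a hedged sketch of including the generated file when you deploy the central stack; the exact set of environment files depends on your deployment, and the file names come from this guide's layout:
$ openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -e images-env.yaml \
  -e central/overrides.yaml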
4.9. Creating fast datapath roles for the edge
To use fast datapath services at the edge, you must create a custom role that defines both fast datapath and edge services. When you create the roles file for deployment, you can include the newly created role that defines services needed for both distributed compute node architecture and fast datapath services such as DPDK or SR-IOV.
For example, create a custom DistributedCompute role with DPDK:
Prerequisites
A successful undercloud installation. For more information, see Installing the undercloud.
Procedure
1. Log in to the undercloud host as the stack user.
2. Copy the default roles directory:
cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
3. Create a new file named DistributedComputeDpdk.yaml from the DistributedCompute.yaml file:
cp roles/DistributedCompute.yaml roles/DistributedComputeDpdk.yaml
4. Add DPDK services to the new DistributedComputeDpdk.yaml file. You can identify the parameters that you need to add by identifying the parameters in the ComputeOvsDpdk.yaml file that are not present in the DistributedComputeDpdk.yaml file:
diff -u roles/DistributedComputeDpdk.yaml roles/ComputeOvsDpdk.yaml
In the output, the parameters that are preceded by + are present in the ComputeOvsDpdk.yaml file but are not present in the DistributedComputeDpdk.yaml file. Include these parameters in the new DistributedComputeDpdk.yaml file.
5. Use the DistributedComputeDpdk.yaml file to create a DistributedComputeDpdk roles file:
openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeDpdk
You can use this same method to create fast datapath roles for SR-IOV, or a combination of SR-IOV and DPDK for the edge to meet your requirements.
Additional Resources
If you are planning to deploy edge sites without block storage, see the following:
If you are planning to deploy edge sites with Ceph storage, see the following: