Chapter 2. Designing your separate heat stacks deployment
To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:
- Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
- DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
To make management simpler, create a separate availability zone (AZ) for each stack.
If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. In the following example, the storage network (referred to as the public_network) spans two subnets that are separated by a comma and enclosed in single quotes:
CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
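The StorageMgmt network follows the same format. The following is a minimal sketch, assuming that the StorageMgmt network maps to the ceph-ansible cluster_network parameter; the subnet values are placeholders for the ranges used at your sites:
CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
  # StorageMgmt subnets (placeholder values); replace with your own ranges.
  cluster_network: '172.23.3.0/24,172.23.4.0/24'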
2.1. Reusing network resources in multiple stacks
You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.
Do not use the ManageNetworks setting if you are using the external_resource_* fields.
If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0.
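For example, a minimal sketch of a renamed network entry in the network_data.yaml file of a second stack; the subnet and allocation pool values are placeholders:
- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  # Placeholder subnet; use a range that does not overlap with other stacks.
  ip_subnet: '172.17.0.0/24'
  allocation_pools: [{'start': '172.17.0.10', 'end': '172.17.0.250'}]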
2.1.1. Using ManageNetworks to reuse network resources
With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:
- name: StorageBackup
  vip: true
  name_lower: storage_backup
  ip_subnet: '172.21.1.0/24'
  allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
  gateway_ip: '172.21.1.1'
Use the following sequence so that the new stack does not manage the existing network resources.
Procedure
- Deploy the central stack with ManageNetworks: true, or leave the setting unset.
- Deploy the additional stack with ManageNetworks set to false so that it does not manage the existing network resources.
When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.
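The following is a minimal sketch of how the additional stack might disable network management. The file name dcn0/overrides.yaml matches the directory layout used later in this guide; where you set the parameter in your own environment files is up to you:
parameter_defaults:
  # The additional (DCN) stack reuses networks that the central stack owns
  # and manages, so it must not create, update, or delete them itself.
  ManageNetworks: false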
2.1.2. Using UUIDs to reuse network resources
If you need more control over which networks are reused between stacks, you can use the external_resource_* fields for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them.
Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:
external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID
This example reuses the internal_api network from the control plane stack in a separate stack.
Procedure
Identify the UUIDs of the related network resources:
$ openstack network show internal_api -c id -f value
$ openstack subnet show internal_api_subnet -c id -f value
$ openstack port show internal_api_virtual_ip -c id -f value
Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack:
- name: InternalApi
  external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
  external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
  external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
  name_lower: internal_api
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  mtu: 1400
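If you prefer to capture the UUIDs in a shell session before editing the file, the following is a minimal sketch that wraps the same commands; the variable names are illustrative:
$ NET_ID=$(openstack network show internal_api -c id -f value)
$ SUBNET_ID=$(openstack subnet show internal_api_subnet -c id -f value)
$ VIP_ID=$(openstack port show internal_api_virtual_ip -c id -f value)
$ echo "$NET_ID $SUBNET_ID $VIP_ID"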
2.2. Service placement
In this configuration, each distributed compute node (DCN) site is deployed within its own availability zone (AZ) for Compute and Block Storage (cinder):
- Cinder: Each DCN site uses a Block Storage AZ to run the cinder-volume service. The cinder-volume service is expected to support active/active configuration in a future update.
- Glance: The Image service (glance) uses the Object Storage (swift) back end at the central site. Any Compute instances that are created in a DCN site AZ use HTTP GET to retrieve the image from the central site. In a future release, the Image service will use the Ceph RBD back end at the central site and at DCN sites. Images can then be transported from the central site to the DCN sites, which means that they can be COW-booted at the DCN location.
- Ceph: In this architecture, Ceph does not run at the central site. Instead, each DCN site runs its own Ceph cluster that is colocated with the Compute nodes using HCI. The Ceph back end is only used for Block Storage volumes.
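After the AZs exist, you schedule workloads to a specific site by naming its AZ. The following is a minimal sketch; the image, flavor, network, and size values are placeholders for your own environment:
$ openstack server create --availability-zone dcn0 \
    --image rhel8 --flavor m1.small --network private dcn0-instance
$ openstack volume create --availability-zone dcn0 --size 10 dcn0-volume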
2.3. Managing separate heat stacks
The procedures in this guide show how to deploy three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.
Procedure
Define the central heat stack:
$ mkdir central
$ touch central/overrides.yaml
Extract data from the central heat stack into a common directory for all DCN sites:
$ mkdir dcn-common
$ touch dcn-common/overrides.yaml
$ touch dcn-common/control-plane-export.yaml
The control-plane-export.yaml file is created later by the openstack overcloud export command; see the sketch at the end of this section. It is in the dcn-common directory because all DCN deployments in this guide must use this file.
Define the dcn0 site:
$ mkdir dcn0
$ touch dcn0/overrides.yaml
To deploy more DCN sites, create additional dcn directories and number them sequentially.
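For reference, the following is a minimal sketch of how the export file is generated after the central stack is deployed; the exact options depend on your tripleoclient version, so confirm them with openstack overcloud export --help:
$ openstack overcloud export --stack central \
    --output-file dcn-common/control-plane-export.yaml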