Chapter 2. Designing your separate heat stacks deployment
To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:
- Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
- DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
To make management simpler, create a separate availability zone (AZ) for each stack.
If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. In the following example, the storage network (referred to as the public_network) spans two subnets; the subnets are separated by a comma and the whole value is enclosed in single quotes:
CephAnsibleExtraConfig:
public_network: '172.23.1.0/24,172.23.2.0/24'
2.1. Reusing network resources in multiple stacks
You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.
Do not use the ManageNetworks setting if you are using the external_resource_* fields.
If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0.
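For example, if a second stack needs its own copy of the Internal API network rather than sharing it, its network_data.yaml entry might look like the following sketch, using the renamed properties mentioned above (the subnet and allocation pool values are illustrative, not from this guide):

```yaml
- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  vip: false
  ip_subnet: '172.17.2.0/24'
  allocation_pools: [{'start': '172.17.2.4', 'end': '172.17.2.250'}]
```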
2.1.1. Using ManageNetworks to reuse network resources
With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:
Use the following sequence so that the new stack does not manage the existing network resources.
Procedure
- Deploy the central stack with ManageNetworks: true, or leave the setting unset.
- Deploy the additional stack.
When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.
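Because the central stack owns the network resources, the additional stack must be told not to manage them. A minimal sketch of the override for the additional stack (the file name is hypothetical):

```yaml
# dcn0/overrides.yaml (excerpt). The additional stack reuses, but does
# not manage, the network resources owned by the central stack.
parameter_defaults:
  ManageNetworks: false
```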
2.1.2. Using UUIDs to reuse network resources
If you need more control over which networks are reused between stacks, you can use the external_resource_* field for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them.
Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:
external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID
This example reuses the internal_api network from the control plane stack in a separate stack.
Procedure
- Identify the UUIDs of the related network resources:

  $ openstack network show internal_api -c id -f value
  $ openstack subnet show internal_api_subnet -c id -f value
  $ openstack port show internal_api_virtual_ip -c id -f value

- Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack.
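After both steps, the internal_api entry in the separate stack's network_data.yaml might look like the following sketch. The angle-bracket placeholders stand for the UUIDs returned by the commands above, and the ip_subnet value is illustrative:

```yaml
- name: InternalApi
  name_lower: internal_api
  external_resource_network_id: <UUID from openstack network show>
  external_resource_subnet_id: <UUID from openstack subnet show>
  external_resource_vip_id: <UUID from openstack port show>
  vip: true
  ip_subnet: '172.16.2.0/24'
```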
2.2. Service placement
In this configuration, each distributed compute node (DCN) site is deployed within its own availability zone (AZ) for Compute and Block Storage (cinder):
- Cinder: Each DCN site uses a Block Storage AZ to run the cinder-volume service. The cinder-volume service is expected to support active/active configuration in a future update.
- Glance: The Image service (glance) uses the Object Storage (swift) back end at the central site. Any Compute instances that are created in a DCN site AZ use HTTP GET to retrieve the image from the central site. In a future release, the Image service will use the Ceph RBD back end at the central site and at DCN sites. Images can then be transported from the central site to the DCN sites, which means that they can be COW-booted at the DCN location.
- Ceph: In this architecture, Ceph does not run at the central site. Instead, each DCN site runs its own Ceph cluster that is colocated with the Compute nodes using HCI. The Ceph back end is only used for Block Storage volumes.
2.3. Managing separate heat stacks
The procedures in this guide show how to deploy three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.
Procedure
- Define the central heat stack:

  $ mkdir central
  $ touch central/overrides.yaml

- Extract data from the central heat stack into a common directory for all DCN sites:

  $ mkdir dcn-common
  $ touch dcn-common/overrides.yaml
  $ touch dcn-common/control-plane-export.yaml

  The control-plane-export.yaml file is created later by the openstack overcloud export command. It is in the dcn-common directory because all DCN deployments in this guide must use this file.

- Define the dcn0 site:

  $ mkdir dcn0
  $ touch dcn0/overrides.yaml
To deploy more DCN sites, create additional dcn directories by number.
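The per-site directory pattern above can be scripted when you add sites. A minimal sketch, with hypothetical site names dcn2 and dcn3:

```shell
#!/bin/sh
# Sketch: create template directories for two additional DCN sites.
# The site names dcn2 and dcn3 are examples; this guide itself uses
# central, dcn0, and dcn1.
set -e
for site in dcn2 dcn3; do
  mkdir -p "$site"
  touch "$site/overrides.yaml"
done
```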