Chapter 2. Designing your separate heat stacks deployment

To segment your deployment into separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:

  • Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
  • DCN sites: Each DCN site has its own separate, uniquely named heat stack, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to each remote site.
Note

To make management simpler, create a separate availability zone (AZ) for each stack.
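
For the Compute service, you can create an AZ by adding the DCN Compute nodes to a host aggregate that has an availability zone set. The following commands are a sketch only; the aggregate name, zone name, and host name are examples, not values from this guide:

# Example only: substitute your own aggregate, zone, and host names
$ openstack aggregate create dcn0 --zone dcn0
$ openstack aggregate add host dcn0 dcn0-computehci-0.localdomain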

Note

If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks. Define the Storage and StorageMgmt networks as override values, separate multiple subnets with commas, and enclose each value in single quotes. In the following example, the Storage network (referred to as the public_network) spans two subnets that are separated by a comma, and the value is enclosed in single quotes:

CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
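
The StorageMgmt network uses the same format. The following addition is a sketch only; the subnet values are placeholders that you must replace with the StorageMgmt ranges for your environment. The StorageMgmt network is referred to as the cluster_network:

CephAnsibleExtraConfig:
  cluster_network: '172.24.1.0/24,172.24.2.0/24'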

2.1. Reusing network resources in multiple stacks

You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.

Note

Do not use the ManageNetworks setting if you are using the external_resource_* fields.

If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0.
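
For example, a second stack that requires its own internal API network can define a renamed network in its network_data.yaml file. The subnet and allocation pool values in this sketch are placeholders; replace them with values for your environment:

# Example values only: adjust the subnet and allocation pool for your environment
- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  vip: false
  ip_subnet: '172.17.2.0/24'
  allocation_pools: [{'start': '172.17.2.4', 'end': '172.17.2.250'}]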

2.1.1. Using ManageNetworks to reuse network resources

With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:

- name: StorageBackup
  vip: true
  name_lower: storage_backup
  ip_subnet: '172.21.1.0/24'
  allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
  gateway_ip: '172.21.1.1'

Use the following sequence so that the new stack does not manage the existing network resources.

Procedure

  1. Deploy the central stack with ManageNetworks: true, or leave the setting unset.
  2. Deploy the additional stack with ManageNetworks: false so that the new stack reuses the existing network resources instead of creating them. A minimal environment file sketch follows this procedure.
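
The following is a minimal sketch of an environment file for the additional stack. The file name dcn0/overrides.yaml matches the directory layout used later in this guide; adjust it for your deployment:

# dcn0/overrides.yaml (sketch)
parameter_defaults:
  ManageNetworks: false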

When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.

2.1.2. Using UUIDs to reuse network resources

If you need more control over which networks are reused between stacks, you can use the external_resource_* field for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them.

Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:

external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID

This example reuses the internal_api network from the control plane stack in a separate stack.

Procedure

  1. Identify the UUIDs of the related network resources:

    $ openstack network show internal_api -c id -f value
    $ openstack subnet show internal_api_subnet -c id -f value
    $ openstack port show internal_api_virtual_ip -c id -f value
  2. Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack:

    - name: InternalApi
      external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
      external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
      external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
      name_lower: internal_api
      vip: true
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
      ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
      ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
      mtu: 1400

2.2. Service placement

In this configuration, each distributed compute node (DCN) site is deployed within its own availability zone (AZ) for Compute and Block Storage (cinder):

  • Cinder: Each DCN site uses a Block Storage AZ to run the cinder-volume service. The cinder-volume service is expected to support active/active configuration in a future update.
  • Glance: The Image service (glance) uses the Object Storage (swift) back end at the central site. Any Compute instances that are created in a DCN site AZ use HTTP GET to retrieve the image from the central site. In a future release, the Image service will use the Ceph RBD back end at the central site and at DCN sites. Images can then be transported from the central site to the DCN sites, which means that they can be COW-booted at the DCN location.
  • Ceph: In this architecture, Ceph does not run at the central site. Instead, each DCN site runs its own Ceph cluster that is colocated with the Compute nodes by using hyperconverged infrastructure (HCI). The Ceph back end is used only for Block Storage volumes.
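
To place each DCN stack in its own AZ, set the AZ parameters in the environment file for that stack. The following snippet is a sketch that uses the director parameters for the Compute and Block Storage availability zones; the zone name dcn0 is an example, and you should confirm the parameter names against the templates in your release:

parameter_defaults:
  # dcn0 is an example zone name; use the name of your DCN site
  NovaComputeAvailabilityZone: dcn0
  CinderStorageAvailabilityZone: dcn0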

2.3. Managing separate heat stacks

The procedures in this guide show how to deploy three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.

Procedure

  1. Define the central heat stack:

    $ mkdir central
    $ touch central/overrides.yaml
  2. Extract data from the central heat stack into a common directory for all DCN sites:

    $ mkdir dcn-common
    $ touch dcn-common/overrides.yaml
    $ touch dcn-common/control-plane-export.yaml

    The control-plane-export.yaml file is created later by the openstack overcloud export command, as shown in the example that follows this procedure. It is in the dcn-common directory because all DCN deployments in this guide must use this file.

  3. Define the dcn0 site:

    $ mkdir dcn0
    $ touch dcn0/overrides.yaml
Note

To deploy more DCN sites, create additional numbered dcn directories, such as dcn1.
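
After you deploy the central stack, generate the control-plane-export.yaml file that is referenced in step 2. The following command is a sketch; depending on your release, the openstack overcloud export command can take additional options, such as a configuration download directory:

# Sketch: verify the available options for your release
$ openstack overcloud export --stack central --output-file dcn-common/control-plane-export.yaml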
