
Chapter 4. Preparing overcloud templates for DCN deployment


4.1. Prerequisites for using separate heat stacks

Your environment must meet the following prerequisites before you create a deployment using separate heat stacks:

  • A working Red Hat OpenStack Platform 16 undercloud.
  • For Ceph Storage users: access to Red Hat Ceph Storage 4.
  • For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks.
  • If you plan to deploy Ceph storage at the edge, you must also deploy Ceph storage at the central location.
  • For each additional DCN site: three HCI compute nodes.
  • All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs.
  • All nodes must be introspected by ironic.
  • Red Hat recommends leaving the <role>HostnameFormat parameter as the default value: %stackname%-<role>-%index%. If you do not include the %stackname% prefix, your overcloud uses the same hostnames for distributed compute nodes in different stacks. Ensure that your distributed compute nodes use the %stackname% prefix to distinguish nodes from different edge sites. For example, if you deploy two edge sites named dcn0 and dcn1, the stack name prefix helps you to distinguish between dcn0-distributedcompute-0 and dcn1-distributedcompute-0 when you run the openstack server list command on the undercloud.
  • Source the centralrc authentication file to schedule workloads at edge sites as well as at the central location, as shown in the example after this list. You do not require the authentication files that are automatically generated for edge sites.
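
For example, the following is a minimal sketch of scheduling a workload to an edge availability zone named dcn0 from the central location, assuming that the centralrc file is in the home directory of the stack user; the flavor, image, and network names are placeholders for values from your environment:

$ source ~/centralrc
$ openstack server create --availability-zone dcn0 \
    --flavor <flavor> --image <image> --network <network> \
    dcn0-instance-0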

4.2. Limitations of the example separate heat stacks deployment

This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations:

  • Spine/Leaf networking - The example in this guide does not demonstrate the routing configuration that distributed compute node (DCN) deployments require.
  • Ironic DHCP Relay - This guide does not include how to configure Ironic with a DHCP relay.

4.3. Designing your separate heat stacks deployment

To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:

  • Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
  • DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
Note

You must create a separate availability zone (AZ) for each stack.
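
One way to create such an AZ after a DCN stack is deployed is with a host aggregate. The following is a minimal sketch for the dcn0 stack; the Compute node hostname is a placeholder for a node from that stack:

$ source ~/centralrc
$ openstack aggregate create --zone dcn0 dcn0
$ openstack aggregate add host dcn0 <dcn0-compute-node-hostname>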

Note

If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks so that ceph-ansible correctly configures Ceph to use those networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. In the following example, the storage network (referred to by ceph-ansible as the public_network) spans two subnets; the values are separated by a comma and enclosed in single quotes:

CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
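
The StorageMgmt network maps to the ceph-ansible cluster_network value in the same way. The following sketch overrides both networks; the StorageMgmt subnets are placeholders:

CephAnsibleExtraConfig:
  public_network: '172.23.1.0/24,172.23.2.0/24'
  cluster_network: '172.25.1.0/24,172.25.2.0/24'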

4.4. Reusing network resources in multiple stacks

You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields.

Note

Do not use the ManageNetworks setting if you are using the external_resource_* fields.

If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0.
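
For example, the following is a minimal sketch of a renamed Internal API network in the network_data.yaml file of a dcn0 stack. The subnet values are placeholders, and vip is set to false on the assumption that the virtual IP remains in the central stack:

- name: InternalApiCompute0
  name_lower: internal_api_compute_0
  vip: false
  ip_subnet: '172.17.0.0/24'
  allocation_pools: [{'start': '172.17.0.10', 'end': '172.17.0.250'}]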

4.5. Using ManageNetworks to reuse network resources

With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file, and the setting applies globally to all network resources. The network_data.yaml file defines the network resources that the stack uses:

- name: StorageBackup
  vip: true
  name_lower: storage_backup
  ip_subnet: '172.21.1.0/24'
  allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
  gateway_ip: '172.21.1.1'

When you set ManageNetworks to false, the nodes will use the existing networks that were already created in the central stack.

Use the following sequence so that the new stack does not manage the existing network resources.

Procedure

  1. Deploy the central stack with ManageNetworks set to true, or leave the setting unset.
  2. Deploy the additional stack with ManageNetworks set to false, as shown in the example after this procedure.
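
The following is a minimal sketch of the setting for the additional stack, in an environment file that you include with its deployment command:

parameter_defaults:
  ManageNetworks: false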

When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml. This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them.

4.6. Using UUIDs to reuse network resources

If you need more control over which networks are reused between stacks, you can use the external_resource_* field for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them.

Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack:

external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID

This example reuses the internal_api network from the control plane stack in a separate stack.

Procedure

  1. Identify the UUIDs of the related network resources:

    $ openstack network show internal_api -c id -f value
    $ openstack subnet show internal_api_subnet -c id -f value
    $ openstack port show internal_api_virtual_ip -c id -f value
  2. Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack:

    - name: InternalApi
      external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
      external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
      external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
      name_lower: internal_api
      vip: true
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
      ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
      ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
      mtu: 1400

4.7. Managing separate heat stacks

The procedures in this guide show how to organize the environment files for three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.

Procedure

  1. Define the central heat stack:

    $ mkdir central
    $ touch central/overrides.yaml
  2. Extract data from the central heat stack into a common directory for all DCN sites:

    $ mkdir dcn-common
    $ touch dcn-common/overrides.yaml
    $ touch dcn-common/central-export.yaml

    The central-export.yaml file is created later by the openstack overcloud export command; see the example command at the end of this section. The file is in the dcn-common directory because all DCN deployments in this guide must use it.

  3. Define the dcn0 site:

    $ mkdir dcn0
    $ touch dcn0/overrides.yaml

To deploy more DCN sites, create additional numbered dcn directories.

Note

The touch command is used here only to illustrate the file organization. Each file must contain the appropriate content for a successful deployment.
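
For reference, the central-export.yaml file that is referenced in step 2 is generated after the central stack is deployed, with a command similar to the following sketch:

$ openstack overcloud export \
  --stack central \
  --output-file dcn-common/central-export.yaml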

4.8. Retrieving the container images

Use the following procedure, and its example file contents, to retrieve the container images that you need for deployments with separate heat stacks. Ensure that the container images for optional or edge-specific services are included by running the openstack tripleo container image prepare command with the environment files for the edge site.

For more information, see Preparing container images.

Procedure

  1. Add your Registry Service Account credentials to containers.yaml.

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-4-rhel8
          ceph_tag: latest
          name_prefix: openstack-
          namespace: registry.redhat.io/rhosp16-rhel8
          tag: latest
      ContainerImageRegistryCredentials:
        # https://access.redhat.com/RegistryAuthentication
        registry.redhat.io:
          registry-service-account-username: registry-service-account-password
  2. Generate the environment file as images-env.yaml:

    sudo openstack tripleo container image prepare \
    -e containers.yaml \
    --output-env-file images-env.yaml

    The resulting images-env.yaml file is included as part of the overcloud deployment procedure for the stack for which it is generated.
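
    For example, the following is a minimal sketch of a dcn0 deployment command that includes the generated file; the full set of environment files depends on your deployment:

    openstack overcloud deploy \
      --stack dcn0 \
      --templates /usr/share/openstack-tripleo-heat-templates/ \
      -e dcn-common/central-export.yaml \
      -e images-env.yaml \
      -e dcn0/overrides.yaml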

4.9. Creating fast datapath roles for the edge

To use fast datapath services at the edge, you must create a custom role that defines both fast datapath and edge services. When you create the roles file for deployment, you can include the newly created role that defines services needed for both distributed compute node architecture and fast datapath services such as DPDK or SR-IOV.

For example, create a custom DistributedCompute role that includes DPDK:

Prerequisites

A successful undercloud installation. For more information, see Installing the undercloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Copy the default roles directory:

    cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
  3. Create a new file named DistributedComputeDpdk.yaml from the DistributedCompute.yaml file:

    cp roles/DistributedCompute.yaml roles/DistributedComputeDpdk.yaml
  4. Add DPDK services to the new DistributedComputeDpdk.yaml file. You can identify the parameters that you need to add by comparing the ComputeOvsDpdk.yaml file with the DistributedComputeDpdk.yaml file:

    diff -u roles/DistributedComputeDpdk.yaml roles/ComputeOvsDpdk.yaml

    In the output, the parameters that are preceded by + are present in the ComputeOvsDpdk.yaml file but are not present in the DistributedComputeDpdk.yaml file. Include these parameters in the new DistributedComputeDpdk.yaml file.

  5. Use the DistributedComputeDpdk.yaml file to generate a DistributedComputeDpdk roles file:

    openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeDpdk

You can use the same method to create fast datapath roles for SR-IOV, or for a combination of SR-IOV and DPDK, to meet the requirements of your edge sites.
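
For example, the following is a minimal sketch of the same steps for an SR-IOV variant. The DistributedComputeSriov role name is a placeholder; after you run the diff, add the missing services to the new file before you generate the roles file:

cp roles/DistributedCompute.yaml roles/DistributedComputeSriov.yaml
diff -u roles/DistributedComputeSriov.yaml roles/ComputeSriov.yaml
openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeDpdk DistributedComputeSriov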

4.10. Configuring jumbo frames

Jumbo frames are frames with an MTU of 9,000. Jumbo frames are not mandatory for the Storage and Storage Management networks but the increase in MTU size improves storage performance. If you want to use jumbo frames, you must configure all network switch ports in the data path to support jumbo frames.

Important

Network configuration changes such as MTU settings must be completed during the initial deployment. They cannot be applied to an existing deployment.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Locate the network definition file.
  3. Modify the network definition file to extend the template to include the StorageMgmtIpSubnet and StorageMgmtNetworkVlanID attributes of the Storage Management network. Set the mtu attribute of the interfaces to 9000.

    The following is an example of implementing these interface settings:

    -
        type: interface
        name: em2
        use_dhcp: false
        mtu: 9000
    -
        type: vlan
        device: em2
        mtu: 9000
        use_dhcp: false
        vlan_id: {get_param: StorageMgmtNetworkVlanID}
        addresses:
        -
            ip_netmask: {get_param: StorageMgmtIpSubnet}
    -
        type: vlan
        device: em2
        mtu: 9000
        use_dhcp: false
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
        -
            ip_netmask: {get_param: StorageIpSubnet}
  4. Save the changes to the network definition file.

    Note

    All network switch ports between servers that use the interface with the new MTU setting must be updated to support jumbo frames. If these switch changes are not made, problems can develop at the application layer that can prevent the Red Hat Ceph Storage cluster from reaching quorum. If you make these changes and still observe these problems, verify that all hosts that use the network configured for jumbo frames can communicate at the configured MTU. Use a command similar to the following example:

    ping -M do -s 8972 172.16.1.11

If you plan to deploy edge sites without block storage, see the corresponding chapter later in this guide.

If you plan to deploy edge sites with Red Hat Ceph Storage, see the corresponding chapter later in this guide.
