Chapter 4. Preparing overcloud templates for DCN deployment

4.1. Prerequisites for using separate heat stacks

Your environment must meet the following prerequisites before you create a deployment using separate heat stacks:

  • An installed instance of Red Hat OpenStack Platform director 17.1.
  • For Ceph Storage users: access to Red Hat Ceph Storage 6.
  • For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks.
  • If you plan to deploy Ceph storage at the edge, you must also deploy Ceph storage at the central location.
  • For each additional DCN site: three HCI compute nodes.
  • All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs.
  • All nodes must be introspected by ironic.
  • Red Hat recommends leaving the <role>HostnameFormat parameter at its default value: %stackname%-<role>-%index%. If you omit the %stackname% prefix, your overcloud uses the same hostnames for distributed compute nodes in different stacks. Ensure that your distributed compute nodes use the %stackname% prefix to distinguish nodes that belong to different edge sites. For example, if you deploy two edge sites named dcn0 and dcn1, the stack name prefix helps you to distinguish between dcn0-distributedcompute-0 and dcn1-distributedcompute-0 when you run the openstack server list command on the undercloud. See the example after this list.
  • Source the centralrc authentication file to schedule workloads at edge sites as well as at the central location. You do not require authentication files that are automatically generated for edge sites.
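
The following is a minimal sketch of this parameter for a role named DistributedCompute; the parameter name follows the <role>HostnameFormat pattern, and the value shown is the default:

    parameter_defaults:
      # Default format. Keep the %stackname% prefix so that nodes in
      # different stacks, such as dcn0 and dcn1, receive distinct hostnames.
      DistributedComputeHostnameFormat: '%stackname%-distributedcompute-%index%'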

4.2. Limitations of the example separate heat stacks deployment

This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations:

  • Spine/Leaf networking - The example in this guide does not demonstrate the routing configuration that distributed compute node (DCN) deployments require.
  • Ironic DHCP Relay - This guide does not describe how to configure ironic with a DHCP relay.

4.3. Designing your separate heat stacks deployment

To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types:

  • Controller nodes: A separate heat stack, named central in this example, deploys the Controller nodes. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks.
  • DCN sites: You can have separate, uniquely named heat stacks, such as dcn0, dcn1, and so on. Use a DHCP relay to extend the provisioning network to the remote site.
Note

You must create a separate availability zone (AZ) for each stack.
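
As an illustrative sketch, one way to assign each edge stack its own AZ is to set availability zone parameters in that stack's environment file. The dcn0 value and the choice of parameters here are examples, not a complete configuration:

    parameter_defaults:
      # Illustrative: place the dcn0 stack's compute and block storage
      # services in an AZ named after the stack.
      NovaComputeAvailabilityZone: dcn0
      CinderStorageAvailabilityZone: dcn0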

4.4. Managing separate heat stacks

The procedures in this guide show how to organize the environment files for three heat stacks: central, dcn0, and dcn1. Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated.

Procedure

  1. Define the central heat stack:

    $ mkdir central
    $ touch central/overrides.yaml
  2. Extract data from the central heat stack into a common directory for all DCN sites:

    $ mkdir dcn-common
    $ touch dcn-common/overrides.yaml
  3. Define the dcn0 site:

    $ mkdir dcn0
    $ touch dcn0/overrides.yaml

To deploy more DCN sites, create additional numbered dcn directories, for example dcn1.

Note

The touch command is used here only to illustrate the file organization. Each file must contain the appropriate content for a successful deployment.
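
Each stack is later deployed with the environment files from its own directory. The following is an abbreviated sketch of this pattern, not a complete deployment command; additional environment files are required:

    $ openstack overcloud deploy \
      --stack central \
      --templates /usr/share/openstack-tripleo-heat-templates/ \
      -e central/overrides.yaml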

4.5. Retrieving the container images

Use the following procedure, and its example file contents, to retrieve the container images that you need for deployments with separate heat stacks. You must ensure that the container images for optional or edge-specific services are included by running the openstack tripleo container image prepare command with the edge site's environment files.

For more information, see Preparing container images in the Installing and managing Red Hat OpenStack Platform with director guide.

Procedure

  1. Add your Registry Service Account credentials to containers.yaml.

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-6-rhel9
          ceph_tag: latest
          name_prefix: openstack-
          namespace: registry.redhat.io/rhosp17-rhel9
          tag: latest
      ContainerImageRegistryCredentials:
        # https://access.redhat.com/RegistryAuthentication
        registry.redhat.io:
          registry-service-account-username: registry-service-account-password
  2. Generate the environment file as images-env.yaml:

    sudo openstack tripleo container image prepare \
    -e containers.yaml \
    --output-env-file images-env.yaml

    The resulting images-env.yaml file is included as part of the overcloud deployment procedure for the stack for which it is generated.
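
For example, the following abbreviated sketch shows how the generated file might be passed when you deploy the dcn0 stack; other required options and environment files are omitted:

    openstack overcloud deploy \
    --stack dcn0 \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    -e dcn0/overrides.yaml \
    -e images-env.yaml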

4.6. Creating fast datapath roles for the edge

To use fast datapath services at the edge, you must create a custom role that defines both fast datapath and edge services. When you create the roles file for deployment, include this new role so that edge nodes run both the distributed compute node services and fast datapath services such as DPDK or SR-IOV.

For example, create a custom role for DistributedCompute with DPDK:

Prerequisites

  • A successful undercloud installation. For more information, see Installing the undercloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Copy the default roles directory:

    cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
  3. Create a new file named DistributedComputeDpdk.yaml from the DistributedCompute.yaml file:

    cp roles/DistributedCompute.yaml roles/DistributedComputeDpdk.yaml
  4. Add DPDK services to the new DistributedComputeDpdk.yaml file. To identify the parameters that you need to add, compare the ComputeOvsDpdk.yaml file with the DistributedComputeDpdk.yaml file:

    diff -u roles/DistributedComputeDpdk.yaml roles/ComputeOvsDpdk.yaml

    In the output, the parameters that are preceded by + are present in the ComputeOvsDpdk.yaml file but not in the DistributedComputeDpdk.yaml file. Include these parameters in the new DistributedComputeDpdk.yaml file, as shown in the example output after this procedure.

  5. Use the DistributedComputeDpdk.yaml file to create a DistributedComputeDpdk roles file:

    openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeDpdk
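
The following line is an illustrative example of the diff output; the exact services vary by release and role definition. A + line such as this indicates a service that you must add to the new role:

    +    - OS::TripleO::Services::ComputeNeutronOvsDpdk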

You can use this same method to create fast datapath roles for SR-IOV, or a combination of SR-IOV and DPDK for the edge to meet your requirements.
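
For example, the following sketch applies the same method to SR-IOV, comparing against the default ComputeSriov.yaml roles file:

    cp roles/DistributedCompute.yaml roles/DistributedComputeSriov.yaml
    diff -u roles/DistributedComputeSriov.yaml roles/ComputeSriov.yaml
    openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeSriov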

If you plan to deploy edge sites without block storage, or with Red Hat Ceph Storage, see the corresponding deployment chapters later in this guide.
