Chapter 3. Configuring and deploying a multi-cell environment with routed networks


Important

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.

To configure your Red Hat OpenStack Platform (RHOSP) deployment to handle multiple cells with routed networks, you must perform the following tasks:

  1. Prepare the control plane for cell network routing on the overcloud stack.
  2. Extract parameter information from the control plane of the overcloud stack.
  3. Configure the cell network routing on the cell stacks.
  4. Create cell roles files for each stack. You can use the default Compute role as a base for the Compute nodes in a cell, and the dedicated CellController role as a base for the cell controller node. You can also create custom roles for use in your multi-cell environment. For more information on creating custom roles, see Composable services and custom roles.
  5. Designate a host for each custom role you create.

    Note

    This procedure is for an environment with a single control plane network. If your environment has multiple control plane networks, such as a spine leaf network environment, then you must also designate a host for each role in each leaf network so that you can tag nodes into each leaf. For more information, see Designating a role for leaf nodes.

  6. Configure each cell.
  7. Deploy each cell stack.

3.1. Prerequisites

3.2. Preparing the control plane and default cell for cell network routing

You must configure routes on the overcloud stack so that the overcloud stack can communicate with the cells. To achieve this, create a network data file that defines all networks and subnets in the main stack, and use this file to deploy both the overcloud stack and the cell stacks.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    [stack@director ~]$ source ~/stackrc
  3. Create a new directory for the common stack configuration:

    (undercloud)$ mkdir common
  4. Copy the default network_data_subnets_routed.yaml file to your common directory to add a composable network for your overcloud stack:

    (undercloud)$ cp /usr/share/openstack-tripleo-heat-templates/network_data_subnets_routed.yaml ~/common/network_data_routed_multi_cell.yaml

    For more information on composable networks, see Composable networks in the Installing and managing Red Hat OpenStack Platform with director guide.

  5. Update the configuration in ~/common/network_data_routed_multi_cell.yaml for your network, and update the cell subnet names for easy identification, for example, change internal_api_leaf1 to internal_api_cell1.
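
    The following excerpt is an illustrative sketch of a routed network entry in network_data_routed_multi_cell.yaml with a renamed cell subnet. The VLAN ID, CIDR, allocation pool, and gateway values are example values only; keep the field layout of the default file that you copied and substitute values that match your environment:

    - name: InternalApi
      name_lower: internal_api
      ...
      subnets:
        internal_api_cell1:
          vlan: 21
          ip_subnet: '172.17.2.0/24'
          allocation_pools: [{'start': '172.17.2.4', 'end': '172.17.2.250'}]
          gateway_ip: '172.17.2.254'
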
  6. Ensure that the interfaces in the NIC template for each role include <network_name>InterfaceRoutes, for example:

    - type: vlan
      vlan_id:
        get_param: InternalApiNetworkVlanID
      addresses:
      - ip_netmask:
          get_param: InternalApiIpSubnet
      routes:
        get_param: InternalApiInterfaceRoutes
  7. Add the network_data_routed_multi_cell.yaml file to the overcloud stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     --stack overcloud \
     -e [your environment files] \
     -n /home/stack/common/network_data_routed_multi_cell.yaml \
     -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
     -e /home/stack/templates/overcloud-networks-deployed.yaml \
     -e /home/stack/templates/overcloud-vip-deployed.yaml

3.3. Extracting parameter information from the overcloud stack control plane

Extract parameter information from the first cell, named default, in the basic overcloud stack.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    [stack@director ~]$ source ~/stackrc
  3. Export the cell configuration and password information from the default cell in the overcloud stack to a new common environment file for the multi-cell deployment:

    (undercloud)$ openstack overcloud cell export --control-plane-stack overcloud \
     -f --output-file common/default_cell_export.yaml \
     --working-dir /home/stack/overcloud-deploy/overcloud/

    This command exports the EndpointMap, HostsEntry, AllNodesConfig, and GlobalConfig parameters, and the password information, to the common environment file.

    Tip

    If the environment file already exists, enter the command with the --force-overwrite or -f option.

3.4. Creating cell roles files for routed networks

When each stack uses a different network, create a cell roles file for each cell stack that includes a custom cell role.

Note

You must designate a host for each custom role that you create. For more information, see Designating hosts for cell roles.

Procedure

  1. Generate a new roles data file that includes the CellController role, along with the other roles you need for the cell stack. The following example generates the roles data file cell1_roles_data.yaml, which includes the Compute and CellController roles renamed to ComputeCell1 and CellControllerCell1:

    (undercloud)$ openstack overcloud roles generate \
      --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
      -o cell1/cell1_roles_data.yaml \
      Compute:ComputeCell1 \
      CellController:CellControllerCell1
  2. Add the HostnameFormatDefault parameter to each role definition in your new cell roles file:

    - name: ComputeCell1
      ...
      HostnameFormatDefault: '%stackname%-compute-cell1-%index%'
      ServicesDefault:
      ...
      networks:
      ...
    - name: CellControllerCell1
      ...
      HostnameFormatDefault: '%stackname%-cellcontrol-cell1-%index%'
      ServicesDefault:
      ...
      networks:
      ...
  3. Add the Networking service (neutron) DHCP and Metadata agents to the ComputeCell1 and CellControllerCell1 roles, if they are not already present:

    - name: ComputeCell1
      ...
      HostnameFormatDefault: '%stackname%-compute-cell1-%index%'
      ServicesDefault:
      - OS::TripleO::Services::NeutronDhcpAgent
      - OS::TripleO::Services::NeutronMetadataAgent
      ...
      networks:
      ...
    - name: CellControllerCell1
      ...
      HostnameFormatDefault: '%stackname%-cellcontrol-cell1-%index%'
      ServicesDefault:
      - OS::TripleO::Services::NeutronDhcpAgent
      - OS::TripleO::Services::NeutronMetadataAgent
      ...
      networks:
      ...
  4. Add the subnets you configured in network_data_routed_multi_cell.yaml to the ComputeCell1 and CellControllerCell1 roles:

    - name: ComputeCell1
      ...
      networks:
        InternalApi:
          subnet: internal_api_subnet_cell1
        Tenant:
          subnet: tenant_subnet_cell1
        Storage:
          subnet: storage_subnet_cell1
    ...
    - name: CellControllerCell1
      ...
      networks:
        External:
          subnet: external_subnet
        InternalApi:
          subnet: internal_api_subnet_cell1
        Storage:
          subnet: storage_subnet_cell1
        StorageMgmt:
          subnet: storage_mgmt_subnet_cell1
        Tenant:
          subnet: tenant_subnet_cell1

3.5. Designating hosts for cell roles

To designate a bare-metal node for a cell role, you must tag the node with a resource class for that role. Perform the following procedure to create a bare-metal resource class for the CellControllerCell1 role. Repeat this procedure for each custom role, substituting the cell controller names with the name of your custom role.

Note

The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.

Procedure

  1. Register the bare-metal node for the CellControllerCell1 role by adding it to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  2. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect \
     --all-manageable --provide

    For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.

  3. Retrieve a list of your nodes to identify their UUIDs:

    (undercloud)$ openstack baremetal node list
  4. Tag each bare-metal node that you want to designate as a cell controller with a custom cell controller resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.CELL1-CONTROLLER <node>
    • Replace <node> with the name or UUID of the bare-metal node.
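    • Optional: To confirm that the resource class is set on the node, you can view the node details, for example:

      (undercloud)$ openstack baremetal node show <node> -f value -c resource_class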
  5. Add the CellControllerCell1 role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

    - name: CellControllerCell1
      count: 1
      defaults:
        resource_class: baremetal.CELL1-CONTROLLER
        network_config:
          template: /home/stack/templates/nic-config/<role_topology_file>
      instances:
      - hostname: cell1-cellcontroller-%index%
        name: cell1controller
    • Replace <role_topology_file> with the name of the network topology file to use for the CellControllerCell1 role, for example, cell1_controller_net_top.j2. You can reuse an existing network topology or create a new custom network interface template for the role or cell. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. To use the default network definition settings, do not include network_config in the role definition.

      For more information about the properties that you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes. For an example node definition file, see Example node definition file.

  6. Provision the new nodes for your role:

    (undercloud)$ openstack overcloud node provision \
    [--stack <stack>] \
    [--network-config] \
    --output <deployment_file> \
    /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Optional: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. Defaults to overcloud.
    • Optional: Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you have not defined the network definitions in the node definition file by using the network_config property, then the default network definitions are used.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
  7. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    (undercloud)$ watch openstack baremetal node list
  8. If you ran the provisioning command without the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

    parameter_defaults:
       ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
       CellControllerCell1NetworkConfigTemplate: /home/stack/templates/nic-configs/<role_topology_file>
       ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2
    • Replace <role_topology_file> with the name of the file that contains the network topology of the CellControllerCell1 role, for example, cell1_controller_net_top.j2. Set to controller.j2 to use the default network topology.

3.6. Configuring and deploying each cell stack with routed networks

Perform the following procedure to configure one cell stack, cell1. Repeat the procedure for each additional cell stack you want to deploy until all your cell stacks are deployed.

Procedure

  1. Create a new environment file for the additional cell in the cell directory to contain the cell-specific parameters, for example, /home/stack/cell1/cell1.yaml.
  2. Add the following parameters to the environment file:

    resource_registry:
      OS::TripleO::CellControllerCell1::Net::SoftwareConfig: /home/stack/templates/nic-configs/cellcontroller.yaml
      OS::TripleO::ComputeCell1::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
    
    parameter_defaults:
      # Specify that this is an additional cell
      NovaAdditionalCell: True
    
      # Disable network creation in order to use the `network_data.yaml` file from the overcloud stack,
      # and create ports for the nodes in the separate stacks on the existing networks.
      ManageNetworks: false
    
      # The DNS names for the VIPs for the cell
      CloudDomain: redhat.local
      CloudName: cell1.redhat.local
      CloudNameInternal: cell1.internalapi.redhat.local
      CloudNameStorage: cell1.storage.redhat.local
      CloudNameStorageManagement: cell1.storagemgmt.redhat.local
      CloudNameCtlplane: cell1.ctlplane.redhat.local
  3. To run the Compute metadata API in each cell instead of in the global Controller, add the following parameter to your cell environment file:

    parameter_defaults:
      NovaLocalMetadataPerCell: True
  4. Add the virtual IP address (VIP) information for the cell to your cell environment file:

    parameter_defaults:
      ...
      VipSubnetMap:
        InternalApi: internal_api_cell1
        Storage: storage_cell1
        StorageMgmt: storage_mgmt_cell1
        External: external_subnet

    This creates virtual IP addresses on the subnet associated with the L2 network segment that the cell Controller nodes are connected to.

  5. Add the environment files to the stack with your other environment files and deploy the cell stack:

    (undercloud)$ openstack overcloud deploy --templates \
     --stack cell1 \
     -e [your environment files] \
     -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
     -e /home/stack/templates/overcloud-networks-deployed.yaml \
     -e /home/stack/templates/overcloud-vip-deployed.yaml \
     -r /home/stack/cell1/cell1_roles_data.yaml \
     -n /home/stack/common/network_data_routed_multi_cell.yaml \
     -e /home/stack/common/default_cell_export.yaml \
     -e /home/stack/cell1/cell1.yaml

3.7. Adding a new cell subnet after deployment

To add a new cell subnet to your overcloud stack after you have deployed your multi-cell environment, you must update the value of NetworkDeploymentActions to include 'UPDATE'.

Procedure

  1. Add the following configuration to an environment file for the overcloud stack to update the network configuration with the new cell subnet:

    parameter_defaults:
      NetworkDeploymentActions: ['CREATE','UPDATE']
  2. Add the configuration for the new cell subnet to ~/common/network_data_routed_multi_cell.yaml.
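
    For example, to add subnets for a new cell named cell2, you might extend the subnets map of each network with an entry similar to the following sketch. The subnet name, VLAN ID, CIDR, allocation pool, and gateway values are illustrative only and must match your environment:

    - name: InternalApi
      name_lower: internal_api
      ...
      subnets:
        internal_api_cell1:
          ...
        internal_api_cell2:
          vlan: 22
          ip_subnet: '172.18.2.0/24'
          allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
          gateway_ip: '172.18.2.254'
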
  3. Deploy the overcloud stack:

    (undercloud)$ openstack overcloud deploy --templates \
     --stack overcloud \
     -n /home/stack/common/network_data_routed_multi_cell.yaml \
     -e [your environment files]
  4. Optional: Reset NetworkDeploymentActions to the default for the next deployment:

    parameter_defaults:
      NetworkDeploymentActions: ['CREATE']