Chapter 4. Configuring the overcloud
After you configure the undercloud, you can configure the remaining overcloud leaf networks with a series of configuration files. After you configure the leaf networks and deploy the overcloud, the resulting deployment has multiple sets of networks with routing available.
4.1. Creating a network data file
To define the leaf networks, create a network data file that contains a YAML-formatted list of each composable network and its attributes. Use the subnets parameter to define the additional leaf subnets with a base network.
Procedure
Create a new network_data_spine_leaf.yaml file in the home directory of the stack user. Use the default network_data_subnets_routed.yaml file as a basis:

$ cp /usr/share/openstack-tripleo-heat-templates/network_data_subnets_routed.yaml /home/stack/network_data_spine_leaf.yaml
In the network_data_spine_leaf.yaml file, edit the YAML list to define each base network and respective leaf subnets as a composable network item. Use the following example syntax to define a base network and two leaf subnets:

- name: <base_name>
  name_lower: <lowercase_name>
  vip: <true/false>
  vlan: '<vlan_id>'
  ip_subnet: '<network_address>/<prefix>'
  allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
  gateway_ip: '<router_ip_address>'
  subnets:
    <leaf_subnet_name>:
      vlan: '<vlan_id>'
      ip_subnet: '<network_address>/<prefix>'
      allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
      gateway_ip: '<router_ip_address>'
    <leaf_subnet_name>:
      vlan: '<vlan_id>'
      ip_subnet: '<network_address>/<prefix>'
      allocation_pools: [{'start': '<start_address>', 'end': '<end_address>'}]
      gateway_ip: '<router_ip_address>'
The following example demonstrates how to define the Internal API network and its leaf networks:
- name: InternalApi
  name_lower: internal_api
  vip: true
  vlan: 10
  ip_subnet: '172.18.0.0/24'
  allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}]
  gateway_ip: '172.18.0.1'
  subnets:
    internal_api_leaf1:
      vlan: 11
      ip_subnet: '172.18.1.0/24'
      allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
      gateway_ip: '172.18.1.1'
    internal_api_leaf2:
      vlan: 12
      ip_subnet: '172.18.2.0/24'
      allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
      gateway_ip: '172.18.2.1'
You do not define the Control Plane networks in the network data file because the undercloud has already created these networks. However, you must set the parameters manually so that the overcloud can configure the NICs accordingly.
Define vip: true for the networks that contain the Controller-based services. In this example, InternalApiLeaf0 contains these services.
4.2. Creating a roles data file
To define each composable role for each leaf and attach the composable networks to each respective role, complete the following steps.
Procedure
Create a custom roles directory in the home directory of the stack user:

$ mkdir ~/roles
Copy the default Controller, Compute, and Ceph Storage roles from the director core template collection to the roles directory. Rename the files for Compute and Ceph Storage to suit Leaf 0:
$ cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller.yaml
$ cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml
$ cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml
Copy the Leaf 0 Compute and Ceph Storage files as a basis for your Leaf 1 and Leaf 2 files:
$ cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml
$ cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml
$ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml
$ cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml
Edit the name, HostnameFormatDefault, and deprecated_nic_config_name parameters in the Leaf 0, Leaf 1, and Leaf 2 files so that they align with the respective Leaf parameters. For example, the parameters in the Leaf 0 Compute file have the following values:

- name: ComputeLeaf0
  HostnameFormatDefault: '%stackname%-compute-leaf0-%index%'
  deprecated_nic_config_name: 'computeleaf0.yaml'
The Leaf 0 Ceph Storage parameters have the following values:
- name: CephStorageLeaf0
  HostnameFormatDefault: '%stackname%-cephstorage-leaf0-%index%'
  deprecated_nic_config_name: 'ceph-storageleaf0.yaml'
Edit the networks parameter in the Leaf 1 and Leaf 2 files so that they align with the respective Leaf network parameters. For example, the parameters in the Leaf 1 Compute file have the following values:

- name: ComputeLeaf1
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Tenant:
      subnet: tenant_leaf1
    Storage:
      subnet: storage_leaf1
The Leaf 1 Ceph Storage parameters have the following values:
- name: CephStorageLeaf1
  networks:
    Storage:
      subnet: storage_leaf1
    StorageMgmt:
      subnet: storage_mgmt_leaf1
Note: This applies only to Leaf 1 and Leaf 2. The networks parameter for Leaf 0 retains the base subnet values, which are the lowercase names of each network combined with a _subnet suffix. For example, the Internal API subnet for Leaf 0 is internal_api_subnet.
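For reference, a Leaf 0 Compute role that keeps the base subnets might resemble the following sketch. This is illustrative only: the exact list of networks depends on the networks that the Compute role uses in your roles file, and the subnet names simply follow the lowercase-name-plus-_subnet pattern described in the note:

- name: ComputeLeaf0
  networks:
    InternalApi:
      subnet: internal_api_subnet
    Tenant:
      subnet: tenant_subnet
    Storage:
      subnet: storage_subnet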
When your role configuration is complete, run the following command to generate the full roles data file:
$ openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller Compute0 Compute1 Compute2 CephStorage0 CephStorage1 CephStorage2
This creates a full roles_data_spine_leaf.yaml file that includes all of the custom roles for each respective leaf network.
Each role has its own NIC configuration. Before you proceed with the spine-leaf configuration, you must create a base set of NIC templates to suit your current NIC configuration.
4.3. Creating a custom NIC configuration
Each role requires a unique NIC configuration. Complete the following steps to create a copy of the base set of NIC templates and map the new templates to the respective NIC configuration resources.
Procedure
Change to the core heat template directory:
$ cd /usr/share/openstack-tripleo-heat-templates
Render the Jinja2 templates with the tools/process-templates.py script, your custom network_data file, and your custom roles_data file:

$ tools/process-templates.py \
    -n /home/stack/network_data_spine_leaf.yaml \
    -r /home/stack/roles_data_spine_leaf.yaml \
    -o /home/stack/openstack-tripleo-heat-templates-spine-leaf
Change to the home directory:
$ cd /home/stack
Copy the content from one of the default NIC templates to use as a basis for your spine-leaf templates. For example, copy the single-nic-vlans NIC template:

$ cp -r openstack-tripleo-heat-templates-spine-leaf/network/config/single-nic-vlans/* /home/stack/templates/spine-leaf-nics/.
Edit each NIC configuration in /home/stack/templates/spine-leaf-nics/ and change the location of the configuration script to an absolute location. Scroll to the network configuration section, which resembles the following snippet:

resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
Change the location of the script to the absolute path:
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
Make this change in each file for each Leaf and save the changes.
Note: For further NIC changes, see "Custom network interface templates" in the Advanced Overcloud Customization guide.
Create a file called spine-leaf-nics.yaml and edit the file.
Create a resource_registry section in the file and add a set of *::Net::SoftwareConfig resources that map to the respective NIC templates:

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/controller.yaml
  OS::TripleO::ComputeLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf0.yaml
  OS::TripleO::ComputeLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf1.yaml
  OS::TripleO::ComputeLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/computeleaf2.yaml
  OS::TripleO::CephStorageLeaf0::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf0.yaml
  OS::TripleO::CephStorageLeaf1::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf1.yaml
  OS::TripleO::CephStorageLeaf2::Net::SoftwareConfig: /home/stack/templates/spine-leaf-nics/ceph-storageleaf2.yaml
These resource mappings override the default resource mappings during deployment.
Save the spine-leaf-nics.yaml file.
Remove the rendered template directory:
$ rm -rf openstack-tripleo-heat-templates-spine-leaf
As a result of this procedure, you now have a set of NIC templates and an environment file that maps the required *::Net::SoftwareConfig resources to them. When you eventually run the openstack overcloud deploy command, ensure that you include the environment files in the following order:
- /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml, which enables network isolation. Note that the director renders this file from the network-isolation.j2.yaml Jinja2 template.
- /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml, which is the default network environment file and includes the default NIC resource mappings. Note that the director renders this file from the network-environment.j2.yaml Jinja2 template.
- /home/stack/templates/spine-leaf-nics.yaml, which contains your custom NIC resource mappings and overrides the default NIC resource mappings.
The following command snippet demonstrates the ordering:
$ openstack overcloud deploy --templates \
    ...
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
    -e /home/stack/templates/spine-leaf-nics.yaml \
    ...
Resources
- For more information about customizing your NIC templates, see "Custom network interface templates" in the Advanced Overcloud Customization guide.
Complete the procedures in the following sections to add details to your network environment files and define certain aspects of the spine-leaf architecture. After you complete this configuration, include these files in the openstack overcloud deploy command.
4.4. Setting control plane parameters
You usually define networking details for isolated spine-leaf networks using a network_data file. The exception is the control plane network, which the undercloud creates. However, the overcloud requires access to the control plane for each leaf. To enable this access, you must include additional parameters in an environment file, such as the spine-leaf-ctlplane.yaml file that you create in this procedure.
In this example, you map each role to its respective control plane subnet so that nodes on each leaf can reach the control plane network.
Procedure
Create a file called spine-leaf-ctlplane.yaml and edit the file.
Create a parameter_defaults section in the file and add the control plane subnet mapping for each spine-leaf network:

parameter_defaults:
  ...
  ControllerControlPlaneSubnet: leaf0
  Compute0ControlPlaneSubnet: leaf0
  Compute1ControlPlaneSubnet: leaf1
  Compute2ControlPlaneSubnet: leaf2
  CephStorage0ControlPlaneSubnet: leaf0
  CephStorage1ControlPlaneSubnet: leaf1
  CephStorage2ControlPlaneSubnet: leaf2
Save the spine-leaf-ctlplane.yaml file.
4.5. Setting the subnet for virtual IP addresses
The Controller role typically hosts virtual IP (VIP) addresses for each network. By default, the overcloud takes the VIPs from the base subnet of each network except for the control plane. The control plane uses ctlplane-subnet, which is the default subnet name created during a standard undercloud installation.
In this spine-leaf scenario, the default base provisioning network is leaf0 instead of ctlplane-subnet. This means that you must add overriding values to the VipSubnetMap parameter to change the subnet that the control plane VIP uses.
Additionally, if the VIPs for one or more networks do not use the base subnet of those networks, you must add overrides to the VipSubnetMap parameter to ensure that director creates the VIPs on the subnet associated with the L2 network segment that connects the Controller nodes.
Procedure
Create a file called spine-leaf-vips.yaml and edit the file.
Create a parameter_defaults section in the file and add the VipSubnetMap parameter based on your requirements:
If you use leaf0 for the provisioning / control plane network, set the ctlplane VIP remapping to leaf0:

parameter_defaults:
  VipSubnetMap:
    ctlplane: leaf0
If you use a different Leaf for multiple VIPs, set the VIP remapping to suit these requirements. For example, use the following snippet to configure the VipSubnetMap parameter to use leaf1 for all VIPs:

parameter_defaults:
  VipSubnetMap:
    ctlplane: leaf1
    redis: internal_api_leaf1
    InternalApi: internal_api_leaf1
    Storage: storage_leaf1
    StorageMgmt: storage_mgmt_leaf1
Save the spine-leaf-vips.yaml file.
4.6. Mapping separate networks
By default, OpenStack Platform uses Open Virtual Network (OVN), which requires that all Controller and Compute nodes connect to a single L2 network for external network access. This means that both Controller and Compute network configurations use a br-ex bridge, which director maps to the datacentre network in the overcloud by default. This mapping is usually either a flat network mapping or a VLAN network mapping. In a spine-leaf architecture, you can change these mappings so that each Leaf routes traffic through the specific bridge or VLAN on that Leaf, which is often the case with edge computing scenarios.
Procedure
Create a file called spine-leaf-separate.yaml and edit the file.
Create a parameter_defaults section in the spine-leaf-separate.yaml file and include the external network mapping for each spine-leaf network:
For flat network mappings, list each Leaf in the NeutronFlatNetworks parameter and set the NeutronBridgeMappings parameter for each Leaf:

parameter_defaults:
  NeutronFlatNetworks: leaf0,leaf1,leaf2
  Controller0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"
  Compute0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"
  Compute1Parameters:
    NeutronBridgeMappings: "leaf1:br-ex"
  Compute2Parameters:
    NeutronBridgeMappings: "leaf2:br-ex"
For VLAN network mappings, additionally set the NeutronNetworkVLANRanges parameter to map VLANs for all three Leaf networks:

  NeutronNetworkType: 'geneve,vlan'
  NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000'
Save the spine-leaf-separate.yaml file.
4.7. Deploying a spine-leaf enabled overcloud
When you have completed your spine-leaf overcloud configuration, complete the following steps to review each file and then run the deployment command:
Procedure
Review the /home/stack/templates/network_data_spine_leaf.yaml file and ensure that it contains each network and subnet for each leaf.
Note: There is currently no automatic validation for the network subnet and allocation_pools values. Ensure that you define these values consistently and that there is no conflict with existing networks.
Review the /home/stack/templates/roles_data_spine_leaf.yaml values and ensure that you define a role for each leaf.
Review the NIC templates in the ~/templates/spine-leaf-nics/ directory and ensure that you define the interfaces for each role on each leaf correctly.
Review the custom spine-leaf-nics.yaml environment file and ensure that it contains a resource_registry section that references the custom NIC templates for each role.
Review the /home/stack/templates/nodes_data.yaml file and ensure that all roles have an assigned flavor and a node count. Also check that you have correctly tagged all nodes for each leaf.
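For reference, a minimal nodes_data.yaml sketch might resemble the following snippet. The flavor names (compute-leaf0 and so on) are placeholders for the flavors that you created when you tagged your nodes, and the per-role parameter names assume the usual <RoleName>Count and Overcloud<RoleName>Flavor convention; adjust both to match the role names in your roles data file and your own environment:

parameter_defaults:
  # Node counts per role (example values only)
  ControllerCount: 3
  ComputeLeaf0Count: 1
  ComputeLeaf1Count: 1
  ComputeLeaf2Count: 1
  CephStorageLeaf0Count: 1
  CephStorageLeaf1Count: 1
  CephStorageLeaf2Count: 1
  # Flavors that match the node tags for each leaf (placeholder names)
  OvercloudControllerFlavor: control
  OvercloudComputeLeaf0Flavor: compute-leaf0
  OvercloudComputeLeaf1Flavor: compute-leaf1
  OvercloudComputeLeaf2Flavor: compute-leaf2
  OvercloudCephStorageLeaf0Flavor: ceph-leaf0
  OvercloudCephStorageLeaf1Flavor: ceph-leaf1
  OvercloudCephStorageLeaf2Flavor: ceph-leaf2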
Run the openstack overcloud deploy command to apply the spine-leaf configuration. For example:

$ openstack overcloud deploy --templates \
    -n /home/stack/templates/network_data_spine_leaf.yaml \
    -r /home/stack/templates/roles_data_spine_leaf.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
    -e /home/stack/templates/spine-leaf-nics.yaml \
    -e /home/stack/templates/spine-leaf-ctlplane.yaml \
    -e /home/stack/templates/spine-leaf-vips.yaml \
    -e /home/stack/templates/spine-leaf-separate.yaml \
    -e /home/stack/templates/nodes_data.yaml \
    -e [OTHER ENVIRONMENT FILES]
- The network-isolation.yaml file is the rendered name of the Jinja2 file in the same location (network-isolation.j2.yaml). Include this file in the deployment command to ensure that director isolates each network to the correct leaf. This ensures that the networks are created dynamically during the overcloud creation process.
- Include the network-environment.yaml file after the network-isolation.yaml file. The network-environment.yaml file provides the default network configuration for composable network parameters.
- Include the spine-leaf-nics.yaml file after the network-environment.yaml file. The spine-leaf-nics.yaml file overrides the default NIC template mappings from the network-environment.yaml file.
- If you created any other spine-leaf network environment files, include these environment files after the spine-leaf-nics.yaml file.
- Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration.
- Wait until the spine-leaf enabled overcloud deploys.