Chapter 2. Configuring routed spine-leaf in the undercloud
This chapter describes how to configure the undercloud to accommodate routed spine-leaf with composable networks.
2.1. Configuring the spine-leaf provisioning networks
To configure the provisioning networks for your spine-leaf infrastructure, edit the undercloud.conf file and set the relevant parameters, as described in the following procedure.
Procedure
- Log in to the undercloud as the stack user. If you do not already have an undercloud.conf file, copy the sample template file:

  [stack@director ~]$ cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
- Edit the undercloud.conf file. Set the following values in the [DEFAULT] section:

  Set local_ip to the undercloud IP on leaf0:

  local_ip = 192.168.10.1/24

  Set undercloud_public_vip to the externally facing IP address of the undercloud:

  undercloud_public_vip = 10.1.1.1

  Set undercloud_admin_vip to the administration IP address of the undercloud. This IP address is usually on leaf0:

  undercloud_admin_vip = 192.168.10.2

  Set local_interface to the interface to bridge for the local network:

  local_interface = eth1

  Set enable_routed_networks to true:

  enable_routed_networks = true

  Define your list of subnets using the subnets parameter. Define one subnet for each L2 segment in the routed spine and leaf:

  subnets = leaf0,leaf1,leaf2

  Specify the subnet associated with the physical L2 segment local to the undercloud using the local_subnet parameter:

  local_subnet = leaf0

  Set the value of undercloud_nameservers:

  undercloud_nameservers = 10.11.5.19,10.11.5.20

  Tip: You can find the current IP addresses of the DNS servers that are used for the undercloud nameserver by looking in /etc/resolv.conf.

  Create a new section for each subnet that you define in the subnets parameter:

  [leaf0]
  cidr = 192.168.10.0/24
  dhcp_start = 192.168.10.10
  dhcp_end = 192.168.10.90
  inspection_iprange = 192.168.10.100,192.168.10.190
  gateway = 192.168.10.1
  masquerade = False

  [leaf1]
  cidr = 192.168.11.0/24
  dhcp_start = 192.168.11.10
  dhcp_end = 192.168.11.90
  inspection_iprange = 192.168.11.100,192.168.11.190
  gateway = 192.168.11.1
  masquerade = False

  [leaf2]
  cidr = 192.168.12.0/24
  dhcp_start = 192.168.12.10
  dhcp_end = 192.168.12.90
  inspection_iprange = 192.168.12.100,192.168.12.190
  gateway = 192.168.12.1
  masquerade = False
- Save the undercloud.conf file.

- Run the undercloud installation command:

  [stack@director ~]$ openstack undercloud install
This configuration creates three subnets on the provisioning network or control plane. The overcloud uses each network to provision systems within each respective leaf.
To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay.
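For reference, the following is a minimal sketch of how the undercloud.conf settings from this procedure fit together. It uses only the example values shown above; substitute the addresses and interface for your environment. The [leaf1] and [leaf2] sections are elided here because they follow the same pattern as [leaf0]:

[DEFAULT]
local_ip = 192.168.10.1/24
undercloud_public_vip = 10.1.1.1
undercloud_admin_vip = 192.168.10.2
local_interface = eth1
enable_routed_networks = true
subnets = leaf0,leaf1,leaf2
local_subnet = leaf0
undercloud_nameservers = 10.11.5.19,10.11.5.20

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1
masquerade = False

# [leaf1] and [leaf2] repeat this pattern on 192.168.11.0/24 and 192.168.12.0/24.

After the installation completes, you can confirm that a subnet exists for each leaf by sourcing the stackrc file and running openstack subnet list on the undercloud.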
2.2. Configuring a DHCP relay
The undercloud uses two DHCP servers on the provisioning network:
- An introspection DHCP server.
- A provisioning DHCP server.
When you configure a DHCP relay, ensure that you forward DHCP requests to both DHCP servers on the undercloud.
You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses.
Configuration of a DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides a DHCP relay configuration example that uses the implementation in the ISC DHCP software. For more information, see the dhcrelay(8) manual page.
Broadcast DHCP relay
This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. Depending on the implementation, you can configure this by specifying either the interface or IP network address:
- Interface
- Specify an interface that is connected to the L2 network segment where the DHCP requests are relayed.
- IP network address
- Specify the network address of the IP network where the DHCP requests are relayed.
Unicast DHCP relay
This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When you use UDP unicast, you must configure the device that provides the DHCP relay to relay DHCP requests to both the IP address that is assigned to the interface used for introspection on the undercloud and the IP address of the network namespace that the OpenStack Networking (neutron) service creates to host the DHCP service for the ctlplane network.

The interface used for introspection is the one defined as inspection_interface in the undercloud.conf file. If you have not set this parameter, the default interface for the undercloud is br-ctlplane.

It is common to use the br-ctlplane interface for introspection. The IP address that you define as the local_ip in the undercloud.conf file is on the br-ctlplane interface.
The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range that you configure for the local_subnet in the undercloud.conf file. The first address in the IP range is the one that you define as dhcp_start in the configuration. For example, 192.168.10.10 is the IP address if you use the following configuration:
[DEFAULT]
local_subnet = leaf0
subnets = leaf0,leaf1,leaf2

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1
masquerade = False
The IP address for the DHCP namespace is automatically allocated. In most cases, this address is the first address in the IP range. To verify that this is the case, run the following commands on the undercloud:
$ openstack port list --device-owner network:dhcp -c "Fixed IP Addresses"
+------------------------------------------------------------------------------+
| Fixed IP Addresses                                                           |
+------------------------------------------------------------------------------+
| ip_address='192.168.10.10', subnet_id='7526fbe3-f52a-4b39-a828-ec59f4ed12b2' |
+------------------------------------------------------------------------------+

$ openstack subnet show 7526fbe3-f52a-4b39-a828-ec59f4ed12b2 -c name
+-------+-------+
| Field | Value |
+-------+-------+
| name  | leaf0 |
+-------+-------+
Example dhcrelay configuration
In the following example, the dhcrelay command in the dhcp package uses the following configuration:

- Interfaces to relay incoming DHCP requests: eth1, eth2, and eth3.
- Interface that the undercloud DHCP servers on the network segment are connected to: eth0.
- The DHCP server used for introspection is listening on IP address 192.168.10.1.
- The DHCP server used for provisioning is listening on IP address 192.168.10.10.
This results in the following dhcrelay command:

$ sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 \
  -i eth0 -i eth1 -i eth2 -i eth3
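Because the -d option keeps dhcrelay in the foreground, you might want a supervisor to keep the relay running across reboots. The following systemd unit is a hypothetical sketch, not something shipped with the dhcp package; the unit name, binary path, and interface names are assumptions that you would adapt to the relay host:

# /etc/systemd/system/dhcrelay.service (hypothetical example unit)
[Unit]
Description=DHCP relay for the undercloud provisioning network
After=network-online.target
Wants=network-online.target

[Service]
# Relay to the provisioning (192.168.10.10) and introspection (192.168.10.1) DHCP servers
ExecStart=/usr/sbin/dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 -i eth0 -i eth1 -i eth2 -i eth3
Restart=on-failure

[Install]
WantedBy=multi-user.target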
Example Cisco IOS routing switch configuration
This example uses the following Cisco IOS configuration to perform the following tasks:

- Configure a VLAN to use for the provisioning network.
- Add the IP address of the leaf.
- Forward UDP and BOOTP requests to the introspection DHCP server that listens on IP address 192.168.10.1.
- Forward UDP and BOOTP requests to the provisioning DHCP server that listens on IP address 192.168.10.10.

interface vlan 2
ip address 192.168.24.254 255.255.255.0
ip helper-address 192.168.10.1
ip helper-address 192.168.10.10
!
Now that you have configured the provisioning network, you can configure the remaining overcloud leaf networks.
2.3. Creating flavors and tagging nodes for leaf networks
Each role in each leaf network requires a flavor and role assignment so that you can tag nodes into their respective leaf. Complete the following steps to create and assign each flavor to a role.
Procedure
- Source the stackrc file:

  $ source ~/stackrc
- Create flavors for each custom role:

  $ ROLES="control compute_leaf0 compute_leaf1 compute_leaf2 ceph-storage_leaf0 ceph-storage_leaf1 ceph-storage_leaf2"
  $ for ROLE in $ROLES; do openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 $ROLE; done
  $ for ROLE in $ROLES; do openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="$ROLE" --property resources:CUSTOM_BAREMETAL='1' --property resources:DISK_GB='0' --property resources:MEMORY_MB='0' --property resources:VCPU='0' $ROLE; done
- Tag nodes to their respective leaf networks. For example, run the following command to tag a node with UUID 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 to the Compute role on Leaf2:

  $ openstack baremetal node set --property capabilities='profile:compute_leaf2,boot_option:local' 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13
- Create an environment file (~/templates/node-data.yaml) that contains the mapping of flavors to roles:

  parameter_defaults:
    OvercloudControllerFlavor: control
    OvercloudComputeLeaf0Flavor: compute_leaf0
    OvercloudComputeLeaf1Flavor: compute_leaf1
    OvercloudComputeLeaf2Flavor: compute_leaf2
    OvercloudCephStorageLeaf0Flavor: ceph-storage_leaf0
    OvercloudCephStorageLeaf1Flavor: ceph-storage_leaf1
    OvercloudCephStorageLeaf2Flavor: ceph-storage_leaf2
    ControllerLeaf0Count: 3
    ComputeLeaf0Count: 3
    ComputeLeaf1Count: 3
    ComputeLeaf2Count: 3
    CephStorageLeaf0Count: 3
    CephStorageLeaf1Count: 3
    CephStorageLeaf2Count: 3

  You can also set the number of nodes to deploy in the overcloud using each respective *Count parameter.
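Before you continue, it can help to confirm that the flavor profiles and node capabilities match. The following commands are a quick sanity check, reusing the example node UUID from the tagging step above:

$ openstack flavor list
$ openstack baremetal node show 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 -f value -c properties

The capabilities string in the node properties must contain the same profile name that the corresponding flavor sets in its capabilities:profile property.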
2.4. Mapping bare metal node ports to control plane network segments
To enable deployment on an L3 routed network, you must configure the physical_network field on the bare metal ports. Each bare metal port is associated with a bare metal node in the OpenStack Bare Metal (ironic) service. The physical network names are the names that you include in the subnets option in the undercloud configuration.

The physical network of the subnet that you specify as local_subnet in the undercloud.conf file is always named ctlplane.
Procedure
- Source the stackrc file:

  $ source ~/stackrc
- Check the bare metal nodes:

  $ openstack baremetal node list
- Ensure that the bare metal nodes are in either the enroll or the manageable state. If a bare metal node is not in one of these states, the command that sets the physical_network property on the bare metal port fails. To set all nodes to the manageable state, run the following command:

  $ for node in $(openstack baremetal node list -f value -c Name); do openstack baremetal node manage $node --wait; done
- Check which bare metal ports are associated with which bare metal node:

  $ openstack baremetal port list --node <node-uuid>
- Set the physical-network parameter for the ports. In the following example, three subnets are defined in the configuration: leaf0, leaf1, and leaf2. The local_subnet is leaf0. Because the physical network for the local_subnet is always ctlplane, the bare metal port connected to leaf0 uses ctlplane, and the remaining ports use the other leaf names:

  $ openstack baremetal port set --physical-network ctlplane <port-uuid>
  $ openstack baremetal port set --physical-network leaf1 <port-uuid>
  $ openstack baremetal port set --physical-network leaf2 <port-uuid>
- Ensure that the nodes are in the available state before you deploy the overcloud:

  $ openstack overcloud node provide --all-manageable
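After you set the physical networks, you can verify the assignment for each port. As a sketch, the --fields selection below limits the output to the relevant columns, assuming that your python-ironicclient version supports field selection on the port list command:

$ openstack baremetal port list --node <node-uuid> --fields uuid physical_network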