Chapter 2. Configuring routed spine-leaf in the undercloud
This chapter describes how to configure the undercloud to accommodate routed spine-leaf with composable networks.
2.1. Configuring the spine leaf provisioning networks
To configure the provisioning networks for your spine leaf infrastructure, edit the undercloud.conf file and set the relevant parameters included in the following procedure.
Procedure
- Log in to the undercloud as the stack user. If you do not already have an undercloud.conf file, copy the sample template file:

  [stack@director ~]$ cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
- Edit the undercloud.conf file. Set the following values in the [DEFAULT] section:

  - Set local_ip to the undercloud IP on leaf0:

    local_ip = 192.168.10.1/24

  - Set undercloud_public_host to the externally facing IP address of the undercloud:

    undercloud_public_host = 10.1.1.1

  - Set undercloud_admin_host to the administration IP address of the undercloud. This IP address is usually on leaf0:

    undercloud_admin_host = 192.168.10.2

  - Set local_interface to the interface to bridge for the local network:

    local_interface = eth1

  - Set enable_routed_networks to true:

    enable_routed_networks = true

  - Define your list of subnets using the subnets parameter. Define one subnet for each L2 segment in the routed spine and leaf:

    subnets = leaf0,leaf1,leaf2

  - Specify the subnet associated with the physical L2 segment local to the undercloud using the local_subnet parameter:

    local_subnet = leaf0

  - Set the value of undercloud_nameservers:

    undercloud_nameservers = 10.11.5.19,10.11.5.20

    Tip: You can find the current IP addresses of the DNS servers that are used for the undercloud nameserver by looking in /etc/resolv.conf.
- Create a new section for each subnet that you define in the subnets parameter.
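For example, a subnet section might look like the following sketch. The section name and parameter names follow the undercloud.conf per-subnet format; the CIDR, DHCP range, inspection range, and gateway values shown here are assumptions that you must replace with values for your environment:

```ini
[leaf0]
# Assumed example values for the leaf0 L2 segment; adjust for your network.
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1
masquerade = False
```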
- Save the undercloud.conf file.
- Run the undercloud installation command:

  [stack@director ~]$ openstack undercloud install
This configuration creates three subnets on the provisioning network or control plane. The overcloud uses each network to provision systems within each respective leaf.
To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay.
2.2. Configuring a DHCP relay
You run the DHCP relay service on a switch, router, or server that is connected to the remote network segment you want to forward the requests from.
Do not run the DHCP relay service on the undercloud.
The undercloud uses two DHCP servers on the provisioning network:
- An introspection DHCP server.
- A provisioning DHCP server.
You must configure the DHCP relay to forward DHCP requests to both DHCP servers on the undercloud.
You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses.
Configuration of DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides a DHCP relay configuration example using the implementation in ISC DHCP software. For more information, see manual page dhcrelay(8).
DHCP option 79 is required for some relays, particularly relays that serve DHCPv6 addresses, and relays that do not pass on the originating MAC address. For more information, see RFC6939.
Broadcast DHCP relay
This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. Depending on the implementation, you can configure this by specifying either the interface or IP network address:
- Interface
- Specify an interface that is connected to the L2 network segment where the DHCP requests are relayed.
- IP network address
- Specify the network address of the IP network where the DHCP requests are relayed.
Unicast DHCP relay
This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When you use UDP unicast, you must configure the device that provides the DHCP relay to relay DHCP requests to both the IP address that is assigned to the interface used for introspection on the undercloud and the IP address of the network namespace that the OpenStack Networking (neutron) service creates to host the DHCP service for the ctlplane network.
The interface used for introspection is the one defined as inspection_interface in the undercloud.conf file. If you have not set this parameter, the default interface for the undercloud is br-ctlplane.
It is common to use the br-ctlplane interface for introspection. The IP address that you define as the local_ip in the undercloud.conf file is on the br-ctlplane interface.
The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range that you configure for the local_subnet in the undercloud.conf file. This is the address that you define as dhcp_start in the configuration. For example, if you set dhcp_start = 192.168.10.10 in the local subnet section, the namespace IP address is 192.168.10.10.
The IP address for the DHCP namespace is allocated automatically. In most cases, it is the first address in the IP range. To verify the address, inspect the ports that the DHCP service owns on the undercloud.
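As a sketch, assuming the standard openstack client is available on the undercloud with admin credentials sourced, you can list the DHCP-owned ports and check their fixed IP addresses:

```shell
# List ports owned by the DHCP service and show their fixed IP addresses.
$ source ~/stackrc
$ openstack port list --device-owner network:dhcp -c "Fixed IP Addresses"
```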
Example dhcrelay configuration

In the following examples, the dhcrelay command in the dhcp package uses the following configuration:

- Interfaces to relay incoming DHCP requests: eth1, eth2, and eth3.
- Interface that the undercloud DHCP servers on the network segment are connected to: eth0.
- The DHCP server used for introspection is listening on IP address 192.168.10.1.
- The DHCP server used for provisioning is listening on IP address 192.168.10.10.

This results in the following dhcrelay command:

- dhcrelay version 4.2.x:

  $ sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 \
      -i eth0 -i eth1 -i eth2 -i eth3

- dhcrelay version 4.3.x and later:

  $ sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 \
      -iu eth0 -id eth1 -id eth2 -id eth3
Example Cisco IOS routing switch configuration
This example uses the following Cisco IOS configuration to perform these tasks:
- Configure a VLAN to use for the provisioning network.
- Add the IP address of the leaf.
- Forward UDP and BOOTP requests to the introspection DHCP server that listens on IP address 192.168.10.1.
- Forward UDP and BOOTP requests to the provisioning DHCP server that listens on IP address 192.168.10.10.
interface vlan 2
ip address 192.168.24.254 255.255.255.0
ip helper-address 192.168.10.1
ip helper-address 192.168.10.10
!
Now that you have configured the provisioning network, you can configure the remaining overcloud leaf networks.
2.3. Creating flavors and tagging nodes for leaf networks
Each role in each leaf network requires a flavor and role assignment so that you can tag nodes into their respective leaf. Complete the following steps to create and assign each flavor to a role.
Procedure
- Source the stackrc file:

  [stack@director ~]$ source ~/stackrc

- Create flavors for each custom role:

  $ ROLES="control compute_leaf0 compute_leaf1 compute_leaf2 ceph-storage_leaf0 ceph-storage_leaf1 ceph-storage_leaf2"
  $ for ROLE in $ROLES; do openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> $ROLE ; done
  $ for ROLE in $ROLES; do openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property resources:DISK_GB='0' --property resources:MEMORY_MB='0' --property resources:VCPU='0' $ROLE ; done
  - Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
  - Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
  - Replace <no_vcpus> with the number of CPUs on the bare metal node.
- Retrieve a list of your nodes to identify their UUIDs:

  (undercloud)$ openstack baremetal node list

- Tag each bare metal node to its leaf network and role by using a custom resource class:

  (undercloud)$ openstack baremetal node set \
    --resource-class baremetal.LEAF-ROLE <node>

  Replace <node> with the ID of the bare metal node.

  For example, enter the following command to tag a node with UUID 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 to the Compute role on Leaf2:

  (undercloud)$ openstack baremetal node set \
    --resource-class baremetal.COMPUTE-LEAF2 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13

- Associate each leaf network role flavor with the custom resource class:

  (undercloud)$ openstack flavor set \
    --property resources:CUSTOM_BAREMETAL_LEAF_ROLE=1 \
    <custom_role>

  To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal Provisioning service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_.

  Note: A flavor can request only one instance of a bare metal resource class.
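The conversion rule above can be sketched as a small shell helper. The helper name to_custom_rc is hypothetical, for illustration only; the rule itself (uppercase, punctuation to underscore, CUSTOM_ prefix) is the one the procedure describes:

```shell
# Hypothetical helper: convert a Bare Metal resource class name to the
# corresponding custom resource class name by uppercasing it, replacing
# every non-alphanumeric character with an underscore, and prefixing CUSTOM_.
to_custom_rc() {
  echo "CUSTOM_$(echo "$1" | tr '[:lower:]' '[:upper:]' | tr -c 'A-Z0-9\n' '_')"
}

to_custom_rc "baremetal.compute-leaf2"
# Prints: CUSTOM_BAREMETAL_COMPUTE_LEAF2
```

For example, the resource class baremetal.compute-leaf2 from the tagging step above maps to the custom resource class CUSTOM_BAREMETAL_COMPUTE_LEAF2 on the flavor.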
- In the node-info.yaml file, specify the flavor that you want to use for each custom leaf role, and the number of nodes to allocate for each custom leaf role. For example, specify the flavor to use, and the number of nodes to allocate, for the custom leaf roles compute_leaf0, compute_leaf1, compute_leaf2, ceph-storage_leaf0, ceph-storage_leaf1, and ceph-storage_leaf2.
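A node-info.yaml for these roles might look like the following sketch. The parameter names assume the usual tripleo-heat-templates convention of Overcloud<Role>Flavor and <Role>Count parameters for custom roles, and the node counts shown are placeholder assumptions:

```yaml
parameter_defaults:
  # Flavor for each role (assumed parameter naming convention).
  OvercloudControllerFlavor: control
  OvercloudComputeLeaf0Flavor: compute_leaf0
  OvercloudComputeLeaf1Flavor: compute_leaf1
  OvercloudComputeLeaf2Flavor: compute_leaf2
  OvercloudCephStorageLeaf0Flavor: ceph-storage_leaf0
  OvercloudCephStorageLeaf1Flavor: ceph-storage_leaf1
  OvercloudCephStorageLeaf2Flavor: ceph-storage_leaf2
  # Node counts per role (assumed values; adjust for your deployment).
  ControllerCount: 3
  ComputeLeaf0Count: 3
  ComputeLeaf1Count: 3
  ComputeLeaf2Count: 3
  CephStorageLeaf0Count: 3
  CephStorageLeaf1Count: 3
  CephStorageLeaf2Count: 3
```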
2.4. Mapping bare metal node ports to control plane network segments
To enable deployment on a L3 routed network, you must configure the physical_network field on the bare metal ports. Each bare metal port is associated with a bare metal node in the OpenStack Bare Metal (ironic) service. The physical network names are the names that you include in the subnets option in the undercloud configuration.
The physical network name of the subnet specified as local_subnet in the undercloud.conf file is always named ctlplane.
Procedure
- Source the stackrc file:

  $ source ~/stackrc

- Check the bare metal nodes:

  $ openstack baremetal node list

  Ensure that the bare metal nodes are either in enroll or manageable state. If a bare metal node is not in one of these states, the command that sets the physical_network property on the bare metal port fails. To set all nodes to manageable state, run the following command:

  $ for node in $(openstack baremetal node list -f value -c Name); do openstack baremetal node manage $node --wait; done

- Check which bare metal ports are associated with which bare metal node:

  $ openstack baremetal port list --node <node-uuid>

- Set the physical-network parameter for the ports. In the following example, three subnets are defined in the configuration: leaf0, leaf1, and leaf2. The local_subnet is leaf0. Because the physical network for the local_subnet is always ctlplane, the bare metal port connected to leaf0 uses ctlplane. The remaining ports use the other leaf names:

  $ openstack baremetal port set --physical-network ctlplane <port-uuid>
  $ openstack baremetal port set --physical-network leaf1 <port-uuid>
  $ openstack baremetal port set --physical-network leaf2 <port-uuid>

- Introspect the nodes before you deploy the overcloud. Include the --all-manageable and --provide options to set the nodes as available for deployment:

  $ openstack overcloud node introspect --all-manageable --provide
2.5. Adding a new leaf to a spine-leaf provisioning network
When you increase network capacity, which can include adding new physical sites, you might need to add a new leaf and a corresponding subnet to your Red Hat OpenStack Platform spine-leaf provisioning network. When you provision a leaf on the overcloud, the corresponding undercloud leaf is used.
Prerequisites
- Your RHOSP deployment uses a spine-leaf network topology.
Procedure
- Log in to the undercloud host as the stack user.
- Source the undercloud credentials file:

  $ source ~/stackrc

- In the /home/stack/undercloud.conf file, do the following:

  - Locate the subnets parameter, and add a new subnet for the leaf that you are adding. A subnet represents an L2 segment in the routed spine and leaf.

    Example

    In this example, a new subnet (leaf3) is added for the new leaf (leaf3):

    subnets = leaf0,leaf1,leaf2,leaf3

  - Create a section for the subnet that you added.

    Example

    In this example, the section [leaf3] is added for the new subnet (leaf3).
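A [leaf3] section might look like the following sketch. The parameter names follow the undercloud.conf per-subnet format; the CIDR, DHCP range, inspection range, and gateway values shown here are assumptions that you must replace with values for the new leaf's L2 segment:

```ini
[leaf3]
# Assumed example values for the new leaf3 segment; adjust for your network.
cidr = 192.168.13.0/24
dhcp_start = 192.168.13.10
dhcp_end = 192.168.13.90
inspection_iprange = 192.168.13.100,192.168.13.190
gateway = 192.168.13.1
masquerade = False
```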
- Save the undercloud.conf file.
- Reinstall your undercloud:

  $ openstack undercloud install