Chapter 8. Customizing networks for the Red Hat OpenStack Platform environment
You can customize the undercloud and overcloud physical networks for your Red Hat OpenStack Platform (RHOSP) environment.
8.1. Customizing undercloud networks
You can customize the undercloud network configuration to install the undercloud with specific networking functionality. You can also configure the undercloud and the provisioning network to use IPv6 instead of IPv4 if you have IPv6 nodes and infrastructure.
8.1.1. Configuring undercloud network interfaces
Include custom network configuration in the undercloud.conf file to install the undercloud with specific networking functionality. For example, some interfaces might not have DHCP. In this case, you must disable DHCP for these interfaces in the undercloud.conf file so that os-net-config can apply the configuration during the undercloud installation process.
Procedure
- Log in to the undercloud host.
- Create a new file undercloud-os-net-config.yaml and include the network configuration that you require. In the addresses section, include the local_ip, such as 172.20.0.1/26. If TLS is enabled in the undercloud, you must also include the undercloud_public_host, such as 172.20.0.2/32, and the undercloud_admin_host, such as 172.20.0.3/32. Here is an example:

network_config:
- name: br-ctlplane
  type: ovs_bridge
  use_dhcp: false
  dns_servers:
  - 192.168.122.1
  domain: lab.example.com
  ovs_extra:
  - "br-set-external-id br-ctlplane bridge-id br-ctlplane"
  addresses:
  - ip_netmask: 172.20.0.1/26
  - ip_netmask: 172.20.0.2/32
  - ip_netmask: 172.20.0.3/32
  members:
  - type: interface
    name: nic2
To create a network bond for a specific interface, use the following sample:
network_config:
- name: br-ctlplane
  type: ovs_bridge
  use_dhcp: false
  dns_servers:
  - 192.168.122.1
  domain: lab.example.com
  ovs_extra:
  - "br-set-external-id br-ctlplane bridge-id br-ctlplane"
  addresses:
  - ip_netmask: 172.20.0.1/26
  - ip_netmask: 172.20.0.2/32
  - ip_netmask: 172.20.0.3/32
  members:
  - name: bond-ctlplane
    type: linux_bond
    use_dhcp: false
    bonding_options: "mode=active-backup"
    mtu: 1500
    members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3
- Include the path to the undercloud-os-net-config.yaml file in the net_config_override parameter in the undercloud.conf file:

[DEFAULT]
...
net_config_override=undercloud-os-net-config.yaml
...

Note: Director uses the file that you include in the net_config_override parameter as the template to generate the /etc/os-net-config/config.yaml file. os-net-config manages the interfaces that you define in the template, so you must perform all undercloud network interface customization in this file.

- Install the undercloud.
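The installation step typically uses the standard command shown in the following sketch; confirm the exact invocation for your environment:

$ openstack undercloud install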
Verification
After the undercloud installation completes successfully, verify that the /etc/os-net-config/config.yaml file contains the relevant configuration:

network_config:
- name: br-ctlplane
  type: ovs_bridge
  use_dhcp: false
  dns_servers:
  - 192.168.122.1
  domain: lab.example.com
  ovs_extra:
  - "br-set-external-id br-ctlplane bridge-id br-ctlplane"
  addresses:
  - ip_netmask: 172.20.0.1/26
  - ip_netmask: 172.20.0.2/32
  - ip_netmask: 172.20.0.3/32
  members:
  - type: interface
    name: nic2
8.1.2. Configuring the undercloud for bare metal provisioning over IPv6
If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations:
- Dual-stack IPv4/IPv6 is not available.
- Tempest validations might not perform correctly.
- IPv4 to IPv6 migration is not available during upgrades.
Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform.
Prerequisites
- An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 networking for the overcloud guide.
Procedure
- Open your undercloud.conf file. Specify the IPv6 address mode as either stateless or stateful:

[DEFAULT]
ipv6_address_mode = <address_mode>
...
- Replace <address_mode> with dhcpv6-stateless or dhcpv6-stateful, based on the mode that your NIC supports.
Note: When you use the stateful address mode, the firmware, chain loaders, and operating systems might use different algorithms to generate an ID that the DHCP server tracks. DHCPv6 does not track addresses by MAC, and does not provide the same address back if the identifier value from the requester changes but the MAC address remains the same. Therefore, when you use stateful DHCPv6 you must also complete the next step to configure the network interface.
- If you configured your undercloud to use stateful DHCPv6, specify the network interface to use for bare metal nodes:

[DEFAULT]
ipv6_address_mode = dhcpv6-stateful
ironic_enabled_network_interfaces = neutron,flat
...
- Set the default network interface for bare metal nodes:

[DEFAULT]
...
ironic_default_network_interface = neutron
...
- Specify whether or not the undercloud should create a router on the provisioning network:

[DEFAULT]
...
enable_routed_networks = <true/false>
...
- Replace <true/false> with true to enable routed networks and prevent the undercloud from creating a router on the provisioning network. When true, the data center router must provide router advertisements.
- Replace <true/false> with false to disable routed networks and create a router on the provisioning network.
- Configure the local IP address, and the IP address for the director Admin API and Public API endpoints over SSL/TLS:

[DEFAULT]
...
local_ip = <ipv6_address>
undercloud_admin_host = <ipv6_address>
undercloud_public_host = <ipv6_address>
...
- Replace <ipv6_address> with the IPv6 address of the undercloud.
- Optional: Configure the provisioning network that director uses to manage instances:

[ctlplane-subnet]
cidr = <ipv6_address>/<ipv6_prefix>
...
- Replace <ipv6_address> with the IPv6 address of the network to use for managing instances when not using the default provisioning network.
- Replace <ipv6_prefix> with the IP address prefix of the network to use for managing instances when not using the default provisioning network.
- Configure the DHCP allocation range for provisioning nodes:

[ctlplane-subnet]
cidr = <ipv6_address>/<ipv6_prefix>
dhcp_start = <ipv6_address_dhcp_start>
dhcp_end = <ipv6_address_dhcp_end>
...
- Replace <ipv6_address_dhcp_start> with the IPv6 address of the start of the network range to use for the overcloud nodes.
- Replace <ipv6_address_dhcp_end> with the IPv6 address of the end of the network range to use for the overcloud nodes.
- Optional: Configure the gateway for forwarding traffic to the external network:

[ctlplane-subnet]
cidr = <ipv6_address>/<ipv6_prefix>
dhcp_start = <ipv6_address_dhcp_start>
dhcp_end = <ipv6_address_dhcp_end>
gateway = <ipv6_gateway_address>
...
- Replace <ipv6_gateway_address> with the IPv6 address of the gateway when not using the default gateway.
- Configure the DHCP range to use during the inspection process:

[ctlplane-subnet]
cidr = <ipv6_address>/<ipv6_prefix>
dhcp_start = <ipv6_address_dhcp_start>
dhcp_end = <ipv6_address_dhcp_end>
gateway = <ipv6_gateway_address>
inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end>
...
- Replace <ipv6_address_inspection_start> with the IPv6 address of the start of the network range to use during the inspection process.
- Replace <ipv6_address_inspection_end> with the IPv6 address of the end of the network range to use during the inspection process.
Note: This range must not overlap with the range defined by dhcp_start and dhcp_end, but must be in the same IP subnet.
- Configure an IPv6 nameserver for the subnet:

[ctlplane-subnet]
cidr = <ipv6_address>/<ipv6_prefix>
dhcp_start = <ipv6_address_dhcp_start>
dhcp_end = <ipv6_address_dhcp_end>
gateway = <ipv6_gateway_address>
inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end>
dns_nameservers = <ipv6_dns>
- Replace <ipv6_dns> with the DNS nameservers specific to the subnet.
- Use the virt-customize tool to modify the overcloud image to disable the cloud-init network configuration. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize.
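As an illustration only, a virt-customize invocation that disables the cloud-init network configuration might look like the following sketch; the image file name and the exact cloud-init override file are assumptions, so follow the Knowledgebase solution for the supported procedure:

$ virt-customize -a overcloud-full.qcow2 \
  --run-command 'printf "network:\n  config: disabled\n" > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg'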
8.2. Customizing overcloud networks
You can customize the configuration of the physical network for your overcloud. For example, you can create configuration files for the network interface controllers (NICs) by using a NIC template file in Jinja2 Ansible format (j2).
8.2.1. Defining custom network interface templates
You can create a set of custom network interface templates to define the NIC layout for each node in your overcloud environment. The overcloud core template collection contains a set of default NIC layouts for different use cases. You can create a custom NIC template by using a Jinja2 format file with a .j2.yaml extension. Director converts the Jinja2 files to YAML format during deployment.
You can then set the network_config property in the overcloud-baremetal-deploy.yaml node definition file to your custom NIC template to provision the networks for a specific node. For more information, see Provisioning bare metal nodes for the overcloud.
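For illustration, a node definition entry that points a role at a custom NIC template might look like the following sketch; the role name, count, and template path are placeholders for your environment:

- name: Compute
  count: 2
  defaults:
    network_config:
      template: /home/stack/templates/single_nic_vlans.j2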
8.2.1.1. Creating a custom NIC template
Create a NIC template to customize the NIC layout for each node in your overcloud environment.
Procedure
- Copy the sample network configuration template you require from /usr/share/ansible/roles/tripleo_network_config/templates/ to your environment file directory:

$ cp /usr/share/ansible/roles/tripleo_network_config/templates/<sample_NIC_template> /home/stack/templates/<NIC_template>
- Replace <sample_NIC_template> with the name of the sample NIC template that you want to copy, for example, single_nic_vlans/single_nic_vlans.j2.
- Replace <NIC_template> with the name of your custom NIC template file, for example, single_nic_vlans.j2.
- Update the network configuration in your custom NIC template to match the requirements for your overcloud network environment. For information about the properties you can use to configure your NIC template, see Network interface configuration options. For an example NIC template, see Example custom network interfaces.
Create or update an existing environment file to enable your custom NIC configuration templates:
parameter_defaults:
  ControllerNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans.j2'
  CephStorageNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans_storage.j2'
  ComputeNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans.j2'
If your overcloud uses the default internal load balancing, add the following configuration to your environment file to assign predictable virtual IPs for Redis and OVNDBs:
parameter_defaults:
  RedisVirtualFixedIPs: [{'ip_address':'<vip_address>'}]
  OVNDBsVirtualFixedIPs: [{'ip_address':'<vip_address>'}]
- Replace <vip_address> with an IP address from outside the allocation pool ranges.
8.2.1.2. Network interface configuration options
Use the following tables to understand the available options for configuring network interfaces.
interface
Defines a single network interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2, instead of named interfaces such as eth0 and eno2. For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth0, eth1, and so on. These are usually onboard interfaces.
- enoX interfaces, such as eno0, eno1, and so on. These are usually onboard interfaces.
- enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, for example, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
- type: interface
  name: nic2
Option | Default | Description |
---|---|---|
name | | Name of the interface. |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | | A list of IP addresses assigned to the interface. |
routes | | A list of routes assigned to the interface. For more information, see routes. |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the interface as the primary interface. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the interface. |
|
Set this option to |
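The following sketch, which assumes example addresses, shows several of these options combined on one statically addressed interface:

- type: interface
  name: nic2
  use_dhcp: false
  mtu: 1500
  dns_servers:
  - 192.0.2.53
  addresses:
  - ip_netmask: 192.0.2.10/24
  routes:
  - ip_netmask: 198.51.100.0/24
    next_hop: 192.0.2.1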
vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
For example:
- type: vlan
  device: nic{{ loop.index + 1 }}
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
Option | Default | Description |
---|---|---|
vlan_id | The VLAN ID. | |
device | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the VLAN. | |
routes | A list of routes assigned to the VLAN. For more information, see routes. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the VLAN as the primary interface. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the VLAN. |
ovs_bond
Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth.
For example:
members:
- type: ovs_bond
  name: bond1
  mtu: {{ min_viable_mtu }}
  ovs_options: {{ bond_interface_ovs_options }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu }}
Option | Default | Description |
---|---|---|
name | Name of the bond. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bond. | |
routes | A list of routes assigned to the bond. For more information, see routes. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the interface as the primary interface. |
members | A sequence of interface objects that you want to use in the bond. | |
ovs_options | A set of options to pass to OVS when creating the bond. | |
ovs_extra | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bond. |
ovs_bridge
Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond, and vlan objects together.
The network interface type, ovs_bridge, takes a parameter name.
If you have multiple bridges, you must use distinct bridge names other than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.
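To illustrate, a template with two bridges might assign each a distinct name, as in the following sketch; the interface names are placeholders:

- type: ovs_bridge
  name: br-storage
  members:
  - type: interface
    name: nic3
- type: ovs_bridge
  name: br-tenant
  members:
  - type: interface
    name: nic4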
If you are defining an OVS bridge for the external tripleo network, then retain the values bridge_name and interface_name, because your deployment framework automatically replaces these values with an external bridge name and an external interface name, respectively.
For example:
- type: ovs_bridge
  name: br-bond
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: ovs_bond
    name: bond1
    mtu: {{ min_viable_mtu }}
    ovs_options: {{ bond_interface_ovs_options }}
    members:
    - type: interface
      name: nic2
      mtu: {{ min_viable_mtu }}
      primary: true
    - type: interface
      name: nic3
      mtu: {{ min_viable_mtu }}
The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. This causes some downtime. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:
- You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface, as shown in the sketch after this list.
- To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond (Linux bridge). If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
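A minimal sketch of the first option, with the Internal API VLAN carried on the provisioning interface, might look like the following; the variable names follow the conventions used in the other examples in this section:

- type: interface
  name: nic1
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: vlan
  device: nic1
  vlan_id: {{ internal_api_vlan_id }}
  addresses:
  - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}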
Option | Default | Description |
---|---|---|
name | Name of the bridge. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bridge. | |
routes | A list of routes assigned to the bridge. For more information, see routes. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
members | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. | |
ovs_options | A set of options to pass to OVS when creating the bridge. | |
ovs_extra | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bridge. |
linux_bond
Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.
For example:
- type: linux_bond
  name: bond1
  mtu: {{ min_viable_mtu }}
  bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
  members:
  - type: interface
    name: ens1f0
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: interface
    name: ens1f1
    mtu: {{ min_viable_mtu }}
Option | Default | Description |
---|---|---|
name | Name of the bond. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bond. | |
routes | A list of routes assigned to the bond. See routes. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the interface as the primary interface. |
members | A sequence of interface objects that you want to use in the bond. | |
bonding_options | A set of options when creating the bond. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bond. |
linux_bridge
Defines a Linux bridge, which connects multiple interface, linux_bond, and vlan objects together. The external bridge also uses two special values for parameters:
- bridge_name, which is replaced with the external bridge name.
- interface_name, which is replaced with the external interface.
For example:
- type: linux_bridge
  name: bridge_name
  mtu:
    get_attr: [MinViableMtu, value]
  use_dhcp: false
  dns_servers:
    get_param: DnsServers
  domain:
    get_param: DnsSearchDomains
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - get_param: ControlPlaneIp
        - get_param: ControlPlaneSubnetCidr
  routes:
    list_concat_unique:
    - get_param: ControlPlaneStaticRoutes
Option | Default | Description |
---|---|---|
name | Name of the bridge. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bridge. | |
routes | A list of routes assigned to the bridge. For more information, see routes. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
members | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bridge. |
routes
Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.
For example:
- type: linux_bridge
  name: bridge_name
  ...
  routes: {{ [ctlplane_host_routes] | flatten | unique }}
Option | Default | Description |
---|---|---|
ip_netmask | None | IP and netmask of the destination network. |
default | False | Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0. |
next_hop | None | The IP address of the router used to reach the destination network. |
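For illustration, a static route list with both a subnet route and a default route might look like the following sketch; the addresses are examples only:

routes:
- ip_netmask: 10.1.2.0/24
  next_hop: 172.17.0.1
- default: true
  next_hop: 192.0.2.1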
8.2.1.3. Example custom network interfaces
The following examples illustrate how to customize network interface templates.
Separate control group and OVS bridge example
The following example Controller node NIC template configures the control group separate from the OVS bridge. The template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. The template creates the OVS bridge on nic4 and nic5.
network_config: - type: interface name: nic1 mtu: {{ ctlplane_mtu }} use_dhcp: false addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} - type: linux_bond name: bond_api mtu: {{ min_viable_mtu_ctlplane }} use_dhcp: false bonding_options: {{ bond_interface_ovs_options }} dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu_ctlplane }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu_ctlplane }} {% for network in role_networks if not network.startswith('Tenant') %} - type: vlan device: bond_api mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} dns_servers: {{ ctlplane_dns_nameservers }} members: - type: linux_bond name: bond-data mtu: {{ min_viable_mtu_dataplane }} bonding_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic4 mtu: {{ min_viable_mtu_dataplane }} primary: true - type: interface name: nic5 mtu: {{ min_viable_mtu_dataplane }} {% for network in role_networks if network.startswith('Tenant') %} - type: vlan device: bond-data mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
Multiple NICs example
The following example uses a second NIC to connect to an infrastructure network with DHCP addresses and another NIC for the bond.
network_config: # Add a DHCP infrastructure network to nic2 - type: interface name: nic2 mtu: {{ tenant_mtu }} use_dhcp: true primary: true - type: vlan mtu: {{ tenant_mtu }} vlan_id: {{ tenant_vlan_id }} addresses: - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }} routes: {{ [tenant_host_routes] | flatten | unique }} - type: ovs_bridge name: br-bond mtu: {{ external_mtu }} dns_servers: {{ ctlplane_dns_nameservers }} use_dhcp: false members: - type: interface name: nic10 mtu: {{ external_mtu }} use_dhcp: false primary: true - type: vlan mtu: {{ external_mtu }} vlan_id: {{ external_vlan_id }} addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}
8.2.1.4. Customizing NIC mappings for pre-provisioned nodes
If you are using pre-provisioned nodes, you can specify the os-net-config mappings for specific nodes by using one of the following methods:
- Configure the NetConfigDataLookup heat parameter in an environment file, and run the openstack overcloud node provision command without --network-config.
- Configure the net_config_data_lookup property in your node definition file, overcloud-baremetal-deploy.yaml, and run the openstack overcloud node provision command with --network-config.
If you are not using pre-provisioned nodes, you must configure the NIC mappings in your node definition file. For more information on configuring the net_config_data_lookup property, see Bare-metal node provisioning attributes.
You can assign aliases to the physical interfaces on each node to pre-determine which physical NIC maps to specific aliases, such as nic1 or nic2, and you can map a MAC address to a specified alias. You can map specific nodes by using the MAC address or DMI keyword, or you can map a group of nodes by using a DMI keyword. The following examples configure three nodes and two node groups with aliases to the physical interfaces. The resulting configuration is applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.
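For reference, the applied mapping on a node might resemble the following sketch of /etc/os-net-config/mapping.yaml; the interface names shown are examples only:

interface_mapping:
  nic1: em3
  nic2: em1
  nic3: em2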
Example 1: Configuring the NetConfigDataLookup parameter in os-net-config-mappings.yaml
NetConfigDataLookup: node1: 1 nic1: "00:c8:7c:e6:f0:2e" node2: nic1: "00:18:7d:99:0c:b6" node3: 2 dmiString: "system-uuid" 3 id: 'A8C85861-1B16-4803-8689-AFC62984F8F6' nic1: em3 # Dell PowerEdge nodegroup1: 4 dmiString: "system-product-name" id: "PowerEdge R630" nic1: em3 nic2: em1 nic3: em2 # Cisco UCS B200-M4" nodegroup2: dmiString: "system-product-name" id: "UCSB-B200-M4" nic1: enp7s0 nic2: enp6s0
1 - Maps node1 to the specified MAC address, and assigns nic1 as the alias for the MAC address on this node.
2 - Maps node3 to the node with the system UUID "A8C85861-1B16-4803-8689-AFC62984F8F6", and assigns nic1 as the alias for the em3 interface on this node.
3 - The dmiString parameter must be set to a valid string keyword. For a list of the valid string keywords, see the DMIDECODE(8) man page.
4 - Maps all the nodes in nodegroup1 to nodes with the product name "PowerEdge R630", and assigns nic1, nic2, and nic3 as the aliases for the named interfaces on these nodes.
Normally, os-net-config registers only the interfaces that are already connected in an UP state. However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state.
Example 2: Configuring the net_config_data_lookup property in overcloud-baremetal-deploy.yaml - all nodes for a role
- name: Controller count: 3 defaults: network_config: net_config_data_lookup: node1: nic1: "00:c8:7c:e6:f0:2e" node2: nic1: "00:18:7d:99:0c:b6" node3: dmiString: "system-uuid" id: 'A8C85861-1B16-4803-8689-AFC62984F8F6' nic1: em3 # Dell PowerEdge nodegroup1: dmiString: "system-product-name" id: "PowerEdge R630" nic1: em3 nic2: em1 nic3: em2 # Cisco UCS B200-M4" nodegroup2: dmiString: "system-product-name" id: "UCSB-B200-M4" nic1: enp7s0 nic2: enp6s0
Example 3: Configuring the net_config_data_lookup property in overcloud-baremetal-deploy.yaml - specific nodes
- name: Controller count: 3 defaults: network_config: template: templates/net_config_bridge.j2 default_route_network: - external instances: - hostname: overcloud-controller-0 network_config: <name/groupname>: nic1: 'XX:XX:XX:XX:XX:XX' nic2: 'YY:YY:YY:YY:YY:YY' nic3: 'ens1f0'
8.2.2. Composable networks
You can create custom composable networks if you want to host specific network traffic on different networks. Director provides a default network topology with network isolation enabled. You can find this configuration in the /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml file.
The overcloud uses the following pre-defined set of network segments by default:
- Internal API
- Storage
- Storage management
- Tenant
- External
You can use composable networks to add networks for various services. For example, if you have a network that is dedicated to NFS traffic, you can present it to multiple roles.
Director supports the creation of custom networks during the deployment and update phases. You can use these additional networks for bare metal nodes, system management, or to create separate networks for different roles. You can also use them to create multiple sets of networks for split deployments where traffic is routed between networks.
8.2.2.1. Adding a composable network
Use composable networks to add networks for various services. For example, if you have a network that is dedicated to storage backup traffic, you can present the network to multiple roles.
Procedure
List the available sample network configuration files:
$ ll /usr/share/openstack-tripleo-heat-templates/network-data-samples/ -rw-r--r--. 1 root root 1554 May 11 23:04 default-network-isolation-ipv6.yaml -rw-r--r--. 1 root root 1181 May 11 23:04 default-network-isolation.yaml -rw-r--r--. 1 root root 1126 May 11 23:04 ganesha-ipv6.yaml -rw-r--r--. 1 root root 1100 May 11 23:04 ganesha.yaml -rw-r--r--. 1 root root 3556 May 11 23:04 legacy-routed-networks-ipv6.yaml -rw-r--r--. 1 root root 2929 May 11 23:04 legacy-routed-networks.yaml -rw-r--r--. 1 root root 383 May 11 23:04 management-ipv6.yaml -rw-r--r--. 1 root root 290 May 11 23:04 management.yaml -rw-r--r--. 1 root root 136 May 11 23:04 no-networks.yaml -rw-r--r--. 1 root root 2725 May 11 23:04 routed-networks-ipv6.yaml -rw-r--r--. 1 root root 2033 May 11 23:04 routed-networks.yaml -rw-r--r--. 1 root root 943 May 11 23:04 vip-data-default-network-isolation.yaml -rw-r--r--. 1 root root 848 May 11 23:04 vip-data-fixed-ip.yaml -rw-r--r--. 1 root root 1050 May 11 23:04 vip-data-routed-networks.yaml
- Copy the sample network configuration file you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory:

$ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml /home/stack/templates/network_data.yaml
- Edit your network_data.yaml configuration file and add a section for your new network:

- name: StorageBackup
  vip: false
  name_lower: storage_backup
  subnets:
    storage_backup_subnet:
      ip_subnet: 172.16.6.0/24
      allocation_pools:
      - start: 172.16.6.4
        end: 172.16.6.250
      gateway_ip: 172.16.6.1
Configure any other network attributes for your environment. For more information about the properties you can use to configure network attributes, see Network definition file configuration options.
If you are deploying Red Hat Ceph Storage and using NFS, ensure that you include an isolated StorageNFS network. The following example is present in these files:
- /usr/share/openstack-tripleo-heat-templates/network-data-samples/ganesha.yaml
- /usr/share/openstack-tripleo-heat-templates/network-data-samples/ganesha-ipv6.yaml
Customize these network settings, including the VLAN ID and the subnet ranges. If IPv4 or IPv6 is not necessary, you can omit the corresponding subnet:
Example:
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  subnets:
    storage_nfs_subnet:
      vlan: 70
      ip_subnet: 172.17.0.0/20
      allocation_pools:
      - start: 172.17.0.4
        end: 172.17.0.250
    storage_nfs_ipv6_subnet:
      ipv6_subnet: fd00:fd00:fd00:7000::/64
      ipv6_allocation_pools:
      - start: fd00:fd00:fd00:7000::4
        end: fd00:fd00:fd00:7000::fffe
- This network will be shared by the overcloud deployment and a Networking service (neutron) provider network that is set up post-overcloud deployment for consumers like the Compute service (nova) VMs to use to mount shares.
- Leave a sizable range outside the allocation pool specified in this example for use in the allocation pool for the subnet definition of the overcloud Networking service StorageNFS provider network.
- When you add an extra composable network that contains a virtual IP, and want to map some API services to this network, use the CloudName{network.name} definition to set the DNS name for the API endpoint:

CloudName{{network.name}}

Example:

parameter_defaults:
  ...
  CloudNameOcProvisioning: baremetal-vip.example.com
- Copy the sample network VIP definition template you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory. The following example copies the vip-data-default-network-isolation.yaml file to a local environment file named vip_data.yaml:

$ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/vip-data-default-network-isolation.yaml /home/stack/templates/vip_data.yaml
- Edit your vip_data.yaml configuration file. The virtual IP data is a list of virtual IP address definitions, each containing the name of the network where the IP address is allocated:

- network: storage_mgmt
  dns_name: overcloud
- network: internal_api
  dns_name: overcloud
- network: storage
  dns_name: overcloud
- network: external
  dns_name: overcloud
  ip_address: <vip_address>
- network: ctlplane
  dns_name: overcloud
- network: storage_nfs
  dns_name: overcloud
  ip_address: <vip_address>
- Replace <vip_address> with the required virtual IP address.
For more information about the properties you can use to configure network VIP attributes in your VIP definition file, see Network VIP attribute properties.
- Copy a sample network configuration template. Jinja2 templates are used to define NIC configuration templates. Browse the examples provided in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory. If one of the examples matches your requirements, use it. If the examples do not match your requirements, copy a sample configuration file and modify it for your needs:

$ cp /usr/share/ansible/roles/tripleo_network_config/templates/single_nic_vlans/single_nic_vlans.j2 /home/stack/templates/
- Edit your single_nic_vlans.j2 configuration file:

---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
  members:
  - type: interface
    name: nic1
    mtu: {{ min_viable_mtu }}
    # force the MAC address of the bridge to this interface
    primary: true
{% for network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
- Set the network_config template in the overcloud-baremetal-deploy.yaml configuration file:

- name: CephStorage
  count: 3
  defaults:
    networks:
    - network: storage
    - network: storage_mgmt
    - network: storage_backup
    network_config:
      template: /home/stack/templates/single_nic_vlans.j2
- If you are provisioning a StorageNFS network for using a CephFS-NFS back end with the Shared File Systems service (manila), edit the Controller or ControllerStorageNfs sections instead of the network_config section, because the StorageNFS network and its VIP are connected to the Controller nodes:

- name: ControllerStorageNfs
  count: 3
  hostname_format: controller-%index%
  instances:
  - hostname: controller-0
    name: controller-0
  - hostname: controller-1
    name: controller-1
  - hostname: controller-2
    name: controller-2
  defaults:
    profile: control
    network_config:
      template: /home/stack/templates/single_nic_vlans.j2
    networks:
    - network: ctlplane
      vif: true
    - network: external
    - network: internal_api
    - network: storage
    - network: storage_mgmt
    - network: tenant
    - network: storage_nfs
- Provision the overcloud networks. This action generates an output file that is used as an environment file when you deploy the overcloud:

(undercloud)$ openstack overcloud network provision \
  --output <deployment_file> /home/stack/templates/<networks_definition_file>.yaml
- Replace <networks_definition_file> with the name of your networks definition file, for example, network_data.yaml, or the name of your StorageNFS network file, for example, network_data_ganesha.yaml.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-networks-deployed.yaml.
- Provision the network VIPs and generate the vip-deployed-environment.yaml file. You use this file when you deploy the overcloud:

(undercloud)$ openstack overcloud network vip provision \
  --stack <stack> \
  --output <deployment_file> /home/stack/templates/vip_data.yaml
- Replace <stack> with the name of the stack for which the network VIPs are provisioned. If not specified, the default is overcloud.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-vip-deployed.yaml.
8.2.2.2. Including a composable network in a role
You can assign composable networks to the overcloud roles defined in your environment. For example, you might include a custom StorageBackup
network with your Ceph Storage nodes, or you might include a custom StorageNFS
network for using CephFS-NFS with the Shared File Systems service (manila). If you used the ControllerStorageNfs
role that is included by default in director, then a StorageNFS network is already added for you.
Procedure
- If you do not already have a custom roles_data.yaml file, copy the default to your home directory:

$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml
- Edit the custom roles_data.yaml file. Include the network name in the networks list for the role that you want to add the network to. In this example, you add the StorageBackup network to the Ceph Storage role:

- name: CephStorage
  description: |
    Ceph OSD Storage node role
  networks:
    Storage:
      subnet: storage_subnet
    StorageMgmt:
      subnet: storage_mgmt_subnet
    StorageBackup:
      subnet: storage_backup_subnet
In this example, you add the StorageNFS network to the Controller node:

- name: Controller
  description: |
    Controller role that has all the controller services loaded, handles
    Database, Messaging and Network functions, and additionally runs a
    ganesha service as a CephFS to NFS gateway. The gateway serves NFS
    exports via a VIP on a new isolated StorageNFS network.
    # ganesha service should always be deployed in HA configuration.
  CountDefault: 3
  tags:
  - primary
  - controller
  networks:
    External:
      subnet: external_subnet
    InternalApi:
      subnet: internal_api_subnet
    Storage:
      subnet: storage_subnet
    StorageMgmt:
      subnet: storage_mgmt_subnet
    Tenant:
      subnet: tenant_subnet
    StorageNFS:
      subnet: storage_nfs_subnet
- After you add custom networks to their respective roles, save the file.
When you run the openstack overcloud deploy
command, include the custom roles_data.yaml
file using the -r
option. Without the -r
option, the deployment command uses the default set of roles with their respective assigned networks.
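For example, a deployment command that includes a custom roles file might look like the following sketch; the environment files shown are placeholders for whatever your deployment requires:

$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e <environment_file_1> \
  -e <environment_file_2>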
8.2.2.3. Assigning OpenStack services to composable networks
Each OpenStack service is assigned to a default network type in the resource registry. These services are bound to IP addresses within the network type’s assigned network. Although the OpenStack services are divided among these networks, the number of actual physical networks can differ as defined in the network environment file. You can reassign OpenStack services to different network types by defining a new network map in an environment file, for example, /home/stack/templates/service-reassignments.yaml
. The ServiceNetMap
parameter determines the network types that you want to use for each service.
For example, you can reassign the Storage Management network services to the Storage Backup network by modifying the following ServiceNetMap values:
parameter_defaults:
  ServiceNetMap:
    SwiftStorageNetwork: storage_backup
    CephClusterNetwork: storage_backup
Changing these parameters to storage_backup
places these services on the Storage Backup network instead of the Storage Management network. This means that you must define a set of parameter_defaults
only for the Storage Backup network and not the Storage Management network.
Director merges your custom ServiceNetMap
parameter definitions into a pre-defined list of defaults that it obtains from ServiceNetMapDefaults
and overrides the defaults. Director returns the full list, including customizations, to ServiceNetMap
, which is used to configure network assignments for various services. For example, GaneshaNetwork
is the default service network for the NFS Gateway for CephFS-NFS. This network defaults to storage_nfs
while falling back to external or ctlplane networks. If you are using a different network instead of the default isolated StorageNFS network, you must update the default network by using a ServiceNetMap
parameter definition.
Example:
parameter_defaults:
  ServiceNetMap:
    GaneshaNetwork: <manila_nfs_network>
- Replace <manila_nfs_network> with the name of your custom network.
Service mappings apply to networks that use vip: true
in the network_data.yaml
file for nodes that use Pacemaker. The overcloud load balancer redirects traffic from the VIPs to the specific service endpoints.
You can find a full list of default services in the ServiceNetMapDefaults
parameter in the /usr/share/openstack-tripleo-heat-templates/network/service_net_map.j2.yaml
file.
8.2.2.4. Enabling custom composable networks
Use one of the default NIC templates to enable custom composable networks. In this example, use the Single NIC with VLANs template (custom_single_nic_vlans).
Procedure
- Source the stackrc undercloud credentials file:

$ source ~/stackrc
- Provision the overcloud networks:

$ openstack overcloud network provision \
  --output overcloud-networks-deployed.yaml \
  custom_network_data.yaml
- Provision the network VIPs:

$ openstack overcloud network vip provision \
  --stack overcloud \
  --output overcloud-networks-vips-deployed.yaml \
  custom_vip_data.yaml
- Provision the overcloud nodes:

$ openstack overcloud node provision \
  --stack overcloud \
  --output overcloud-baremetal-deployed.yaml \
  overcloud-baremetal-deploy.yaml
- Construct your openstack overcloud deploy command, specifying the configuration files and templates in the required order, for example:

$ openstack overcloud deploy --templates \
  --networks-file network_data_v2.yaml \
  -e overcloud-networks-deployed.yaml \
  -e overcloud-networks-vips-deployed.yaml \
  -e overcloud-baremetal-deployed.yaml \
  -e custom-net-single-nic-with-vlans.yaml
This example command deploys the composable networks, including your additional custom networks, across nodes in your overcloud.
8.2.2.5. Renaming the default networks
You can use the network_data.yaml file to modify the user-visible names of the default networks:
- InternalApi
- External
- Storage
- StorageMgmt
- Tenant
To change these names, do not modify the name field. Instead, change the name_lower field to the new name for the network, and update the ServiceNetMap with the new name.
Procedure
- In your network_data.yaml file, enter new names in the name_lower parameter for each network that you want to rename:

- name: InternalApi
  name_lower: MyCustomInternalApi
- Include the default value of the name_lower parameter in the service_net_map_replace parameter:

- name: InternalApi
  name_lower: MyCustomInternalApi
  service_net_map_replace: internal_api
8.2.3. Additional overcloud network configuration
This section follows on from the concepts and procedures outlined in Section 8.2.1, “Defining custom network interface templates” and provides some additional information to help configure parts of your overcloud network.
8.2.3.1. Configuring routes and default routes
You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.
Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false
for interfaces other than the interface that uses the default route.
For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface (nic2):
# No default route on this DHCP interface
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Instead use this DHCP interface as the default route
- type: interface
  name: nic3
  use_dhcp: true
The defroute parameter applies only to routes obtained through DHCP.
To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:
- type: vlan
  device: bond1
  vlan_id: 9
  addresses:
  - ip_netmask: 172.17.0.100/16
  routes:
  - ip_netmask: 10.1.2.0/24
    next_hop: 172.17.0.1
8.2.3.2. Configuring policy-based routing
To configure unlimited access from different networks on Controller nodes, configure policy-based routing. Policy-based routing uses route tables where, on a host with multiple interfaces, you can send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same.
For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface.
Red Hat OpenStack Platform uses the os-net-config
tool to configure network properties for your overcloud nodes. The os-net-config
tool manages the following network routing on Controller nodes:
-
Routing tables in the
/etc/iproute2/rt_tables
file -
IPv4 rules in the
/etc/sysconfig/network-scripts/rule-{ifname}
file -
IPv6 rules in the
/etc/sysconfig/network-scripts/rule6-{ifname}
file -
Routing table specific routes in the
/etc/sysconfig/network-scripts/route-{ifname}
Prerequisites
- You have installed the undercloud successfully. For more information, see Installing director on the undercloud in the Installing and managing Red Hat OpenStack Platform with director guide.
Procedure
- Create the interface entries in a custom NIC template from the /home/stack/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment:

network_config:
- type: interface
  name: em1
  use_dhcp: false
  addresses:
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
  routes:
  - default: true
    next_hop: {{ external_gateway_ip }}
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
    next_hop: {{ external_gateway_ip }}
    table: 2
    route_options: metric 100
  rules:
  - rule: "iif em1 table 200"
    comment: "Route incoming traffic to em1 with table 200"
  - rule: "from 192.0.2.0/24 table 200"
    comment: "Route all traffic from 192.0.2.0/24 with table 200"
  - rule: "add blackhole from 172.19.40.0/24 table 200"
  - rule: "add unreachable iif em1 from 192.168.1.0/24"
Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files relevant to your deployment:
$ openstack overcloud deploy --templates \
  -e /home/stack/templates/<custom-nic-template> \
  -e <OTHER_ENVIRONMENT_FILES>
Verification
Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly:
$ cat /etc/iproute2/rt_tables
$ ip route
$ ip rule
8.2.3.3. Configuring jumbo frames
The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead because each frame adds data in the form of a header. The default value is 1500 and using a higher value requires the configuration of the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.
The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface.
The Storage, Storage Management, Internal API, and Tenant networks can all benefit from jumbo frames.
You can alter the value of the mtu in the jinja2 template or in the network_data.yaml file. If you set the value in the network_data.yaml file, it is rendered during deployment.
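For example, setting the MTU in the network definition might look like the following sketch, assuming a Storage network entry in network_data.yaml with example addresses and VLAN ID:

- name: Storage
  mtu: 9000
  vip: true
  name_lower: storage
  subnets:
    storage_subnet:
      vlan: 30
      ip_subnet: 172.16.1.0/24
      allocation_pools:
      - start: 172.16.1.4
        end: 172.16.1.250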
Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any Floating IP interfaces.
--- {% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: bridge_name mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ [ctlplane_host_routes] | flatten | unique }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} primary: true - type: vlan mtu: 9000 1 vlan_id: {{ storage_vlan_id }} addresses: - ip_netmask: {{ storage_ip }}/{{ storage_cidr }} routes: {{ [storage_host_routes] | flatten | unique }} - type: vlan mtu: {{ storage_mgmt_mtu }} 2 vlan_id: {{ storage_mgmt_vlan_id }} addresses: - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }} routes: {{ [storage_mgmt_host_routes] | flatten | unique }} - type: vlan mtu: {{ internal_api_mtu }} vlan_id: {{ internal_api_vlan_id }} addresses: - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }} routes: {{ [internal_api_host_routes] | flatten | unique }} - type: vlan mtu: {{ tenant_mtu }} vlan_id: {{ tenant_vlan_id }} addresses: - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }} routes: {{ [tenant_host_routes] | flatten | unique }} - type: vlan mtu: {{ external_mtu }} vlan_id: {{ external_vlan_id }} addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}
8.2.3.4. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation
If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network.
ML2/OVS automatically handles this oversized packet issue, and ML2/OVN handles it automatically for TCP packets.
But to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you need to perform additional configuration steps.
These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets.
In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example:
- VM1 is on Network1 with an MTU of 1300.
- VM2 is on Network2 with an MTU of 1200.
A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.
With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support.
Procedure
Set the following value in the [ovn] section of ml2_conf.ini:
[ovn]
ovn_emit_need_to_frag = True
8.2.3.5. Configuring the native VLAN on a trunked interface
If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.
The following example configures a bonded interface where the External network is on the native VLAN:
network_config:
- type: ovs_bridge
  name: br-ex
  addresses:
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
  routes: {{ external_host_routes }}
  members:
  - type: ovs_bond
    name: bond1
    ovs_options: {{ bond_interface_ovs_options }}
    members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.
8.2.3.6. Increasing the maximum number of connections that netfilter tracks
The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet.
You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.
Prerequisites
- A successful RHOSP undercloud installation.
Procedure
- Log in to the undercloud host as the stack user.
- Source the undercloud credentials file:
$ source ~/stackrc
Create a custom YAML environment file.
Example
$ vi /home/stack/templates/custom-environment.yaml
Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings. Enter a new value for the maximum number of connections that netfilter can track in the variable, net.nf_conntrack_max.
Example
In this example, you can set the conntrack limit across all hosts in your RHOSP deployment:
parameter_defaults:
  ExtraSysctlSettings:
    net.nf_conntrack_max:
      value: 500000
Use the <role>Parameters parameter to set the conntrack limit for a specific role:

parameter_defaults:
  <role>Parameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: <simultaneous_connections>
Replace <role> with the name of the role. For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.
Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow.
Example
In this example, you can set the conntrack limit for only the Controller role in your RHOSP deployment:
parameter_defaults:
  ControllerParameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: 500000
Note
The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4294967295.
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.
Important
The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e /home/stack/templates/custom-environment.yaml
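After the deployment completes, you can spot-check the applied values on a node. The following sysctl query is a generic verification step, not part of the documented procedure:

# configured limit and number of connections currently tracked
$ sudo sysctl net.nf_conntrack_max net.netfilter.nf_conntrack_count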
8.2.4. Network interface bonding
You can use various bonding options in your custom network configuration.
8.2.4.1. Network interface bonding for overcloud nodes
You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.
Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.
Bond type | Type value | Allowed bridge types | Allowed members
---|---|---|---
OVS kernel bonds | ovs_bond | ovs_bridge | interface
OVS-DPDK bonds | ovs_dpdk_bond | ovs_user_bridge | ovs_dpdk_port
Linux kernel bonds | linux_bond | ovs_bridge | interface
Do not combine ovs_bridge and ovs_user_bridge on the same node.
8.2.4.2. Creating Open vSwitch (OVS) bonds
You create OVS bonds in your network interface templates. For example, you can create a bond as part of an OVS user space bridge:
- type: ovs_user_bridge
  name: br-dpdk0
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    rx_queue: {{ num_dpdk_interface_rx_queues }}
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic4
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic5
In this example, you create the bond from two DPDK ports.
The ovs_options parameter contains the bonding options. You can configure bonding options in a network environment file with the BondInterfaceOvsOptions parameter:
environment_parameters:
  BondInterfaceOvsOptions: "bond_mode=active-backup"
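In the Jinja2 NIC templates shown in this chapter, the value of BondInterfaceOvsOptions is consumed through the bond_interface_ovs_options Ansible variable. A minimal sketch, with illustrative interface names:

- type: ovs_bond
  name: bond1
  ovs_options: {{ bond_interface_ovs_options }}
  members:
  - type: interface
    name: nic3
    primary: true
  - type: interface
    name: nic4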
8.2.4.3. Open vSwitch (OVS) bonding options
You can set various Open vSwitch (OVS) bonding options with the ovs_options heat parameter in your NIC template files.
bond_mode=balance-slb
- Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.
bond_mode=active-backup
- When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.
lacp=[active | passive | off]
- Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.
other-config:lacp-fallback-ab=true
- Set active-backup as the bond mode if LACP fails.
other_config:lacp-time=[fast | slow]
- Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
other_config:bond-detect-mode=[miimon | carrier]
- Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
other_config:bond-miimon-interval=100
- If using miimon, set the heartbeat interval in milliseconds.
bond_updelay=1000
- Set the interval in milliseconds that a link must be up before it is activated, to prevent flapping.
other_config:bond-rebalance-interval=10000
- Set the interval in milliseconds at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.
8.2.4.4. Using Link Aggregation Control Protocol (LACP) with Open vSwitch (OVS) bonding modes
You can use bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.
Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.
On control and storage networks, Red Hat recommends that you use Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond/LACP/VLAN configuration provides NIC management without the OVS disruption potential.
Objective | OVS bond mode | Compatible LACP options | Notes
---|---|---|---
High availability (active-passive) | active-backup | active, passive, or off |
Increased throughput (active-active) | balance-slb | active, passive, or off |
Increased throughput (active-active) | balance-tcp | active or passive | Requires LACP negotiation with the switch.
8.2.4.5. Creating Linux bonds
You create Linux bonds in your network interface templates. For example, you can create a Linux bond that bonds two interfaces:
- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu_ctlplane }}
  use_dhcp: false
  bonding_options: {{ bond_interface_ovs_options }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu_ctlplane }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu_ctlplane }}
The bonding_options parameter sets the specific bonding options for the Linux bond.
mode
- Sets the bonding mode, which in the example is 802.3ad, or LACP mode. For more information about Linux bonding modes, see "Upstream Switch Configuration Depending on the Bonding Modes" in the Red Hat Enterprise Linux 9 Configuring and Managing Networking guide.
lacp_rate
- Defines whether LACP packets are sent every 1 second, or every 30 seconds.
updelay
- Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages.
miimon
- The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver.
Use the following additional examples as guides to configure your own Linux bonds:
Linux bond set to active-backup mode with one VLAN:

- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu_ctlplane }}
  use_dhcp: false
  bonding_options: "mode=active-backup"
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu_ctlplane }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu_ctlplane }}
- type: vlan
  mtu: {{ internal_api_mtu }}
  vlan_id: {{ internal_api_vlan_id }}
  addresses:
  - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
  routes: {{ internal_api_host_routes }}
Linux bond on OVS bridge. Bond set to 802.3ad LACP mode with one VLAN:

- type: linux_bond
  name: bond_tenant
  mtu: {{ min_viable_mtu_ctlplane }}
  bonding_options: "mode=802.3ad updelay=1000 miimon=100"
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  members:
  - type: interface
    name: p1p1
    mtu: {{ min_viable_mtu_ctlplane }}
  - type: interface
    name: p1p2
    mtu: {{ min_viable_mtu_ctlplane }}
- type: vlan
  mtu: {{ tenant_mtu }}
  vlan_id: {{ tenant_vlan_id }}
  addresses:
  - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }}
  routes: {{ tenant_host_routes }}
Important
You must set up min_viable_mtu_ctlplane before you can use it. Copy /usr/share/ansible/roles/tripleo_network_config/templates/2_linux_bonds_vlans.j2 to your templates directory and modify it for your needs. For more information, see Composable networks, and refer to the steps that pertain to the network configuration template.
8.2.5. Updating the format of your network configuration files
The format of the network configuration yaml files has changed in Red Hat OpenStack Platform (RHOSP) 17.0. The structure of the network configuration file network_data.yaml has changed, and the NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2.
You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the following conversion tools:
- convert_v1_net_data.py
- convert_heat_nic_config_to_ansible_j2.py
You can also manually convert your existing NIC template files.
The files you need to convert include the following:
- network_data.yaml
- Controller NIC templates
- Compute NIC templates
- Any other custom network files
8.2.5.1. Updating the format of your network configuration file
The format of the network configuration yaml file has changed in Red Hat OpenStack Platform (RHOSP) 17.0. You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the convert_v1_net_data.py conversion tool.
Procedure
- Download the conversion tool: /usr/share/openstack-tripleo-heat-templates/tools/convert_v1_net_data.py
- Convert your RHOSP 16+ network configuration file to the RHOSP 17+ format:

  $ python3 convert_v1_net_data.py <network_config>.yaml

  Replace <network_config> with the name of the existing configuration file that you want to convert, for example, network_data.yaml.
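For example, with the default file name used in this guide, the conversion run might look like this:

$ python3 convert_v1_net_data.py network_data.yaml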
8.2.5.2. Automatically converting NIC templates to Jinja2 Ansible format
The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.0.
You can convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool.
You can also manually convert your existing NIC template files. For more information, see Manually converting NIC templates to Jinja2 Ansible format.
The files you need to convert include the following:
- Controller NIC templates
- Compute NIC templates
- Any other custom network files
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

  [stack@director ~]$ source ~/stackrc
Copy the conversion tool to your current directory on the undercloud:
$ cp /usr/share/openstack-tripleo-heat-templates/tools/convert_heat_nic_config_to_ansible_j2.py .
Convert your Compute and Controller NIC template files, and any other custom network files, to the Jinja2 Ansible format:
$ python3 convert_heat_nic_config_to_ansible_j2.py \
  [--stack <overcloud> | --standalone] --networks_file <network_config.yaml> \
  <network_template>.yaml
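For example, a run on a RHOSP 17 deployment without a heat stack might look like the following, where controller.yaml and network_data.yaml stand in for your own files:

$ python3 convert_heat_nic_config_to_ansible_j2.py \
  --standalone --networks_file network_data.yaml \
  controller.yaml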
- Replace <overcloud> with the name or UUID of the overcloud stack. If --stack is not specified, the stack defaults to overcloud.
  Note
  You can use the --stack option only on your RHOSP 16 deployment because it requires the Orchestration service (heat) to be running on the undercloud node. Starting with RHOSP 17, RHOSP deployments use ephemeral heat, which runs the Orchestration service in a container. If the Orchestration service is not available, or you have no stack, then use the --standalone option instead of --stack.
- Replace <network_config.yaml> with the name of the configuration file that describes the network deployment, for example, network_data.yaml.
- Replace <network_template> with the name of the network configuration file that you want to convert.

Repeat this command until you have converted all your custom network configuration files. The convert_heat_nic_config_to_ansible_j2.py script generates a .j2 file for each yaml file that you pass to it for conversion.

- Inspect each generated .j2 file to ensure the configuration is correct and complete for your environment, and manually address any comments generated by the tool that highlight where the configuration could not be converted. For more information about manually converting the NIC configuration to Jinja2 format, see Heat parameter to Ansible variable mappings.
- Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files:

  parameter_defaults:
    ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2'
    ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'

- Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files:

  resource_registry:
    OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
8.2.5.3. Manually converting NIC templates to Jinja2 Ansible format
The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.0.
You can manually convert your existing NIC template files.
You can also convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool. For more information, see Automatically converting NIC templates to Jinja2 Ansible format.
The files you need to convert include the following:
- Controller NIC templates
- Compute NIC templates
- Any other custom network files
Procedure
- Create a Jinja2 template. You can create a new template by copying an example template from the /usr/share/ansible/roles/tripleo_network_config/templates/ directory on the undercloud node.
- Replace the heat intrinsic functions with Jinja2 filters. For example, use the following filter to calculate the min_viable_mtu:

  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in role_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
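The computed value can then be referenced later in the same template, for example to set the MTU on a bond and its member interfaces. This is a minimal sketch; the bond and interface names are illustrative:

  - type: linux_bond
    name: bond_api
    mtu: {{ min_viable_mtu }}
    members:
    - type: interface
      name: nic2
      mtu: {{ min_viable_mtu }}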
- Use Ansible variables to configure the network properties for your deployment. You can configure each individual network manually, or programmatically configure each network by iterating over role_networks:
  - To manually configure each network, replace each get_param function with the equivalent Ansible variable. For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID, then add the following configuration to your template:

    vlan_id: {{ internal_api_vlan_id }}
Table 8.9. Example network property mapping from heat parameters to Ansible vars

yaml file format:

  - type: vlan
    device: nic2
    vlan_id:
      get_param: InternalApiNetworkVlanID
    addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet

Jinja2 Ansible format, j2:

  - type: vlan
    device: nic2
    vlan_id: {{ internal_api_vlan_id }}
    addresses:
    - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}
  - To programmatically configure each network, add a Jinja2 for-loop structure to your template that retrieves the available networks by their role name by using role_networks.
    Example

    {% for network in role_networks %}
    - type: vlan
      mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
      addresses:
      - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
      routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
    {%- endfor %}
  For a full list of the mappings from the heat parameter to the Ansible vars equivalent, see Heat parameter to Ansible variable mappings.
- Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files:

  parameter_defaults:
    ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2'
    ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'
- Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files:

  resource_registry:
    OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
8.2.5.4. Heat parameter to Ansible variable mappings
The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2, in Red Hat OpenStack Platform (RHOSP) 17.x.
To manually convert your existing NIC template files to Jinja2 Ansible format, you can map your heat parameters to Ansible variables to configure the network properties for pre-provisioned nodes in your deployment. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument.
For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID, then replace it with the following configuration in your new Jinja2 template:
vlan_id: {{ internal_api_vlan_id }}
If you provision your nodes by running openstack overcloud node provision with the --network-config optional argument, you must configure the network properties for your deployment by using the parameters in overcloud-baremetal-deploy.yaml. For more information, see Heat parameter to provisioning definition file mappings.
The following table lists the available mappings from the heat parameter to the Ansible vars
equivalent.
Heat parameter | Ansible vars |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Note
This Ansible variable is populated with the IP address configured in |
|
|
Configuring a heat parameter that is not listed in the table
To configure a heat parameter that is not listed in the table, you must configure the parameter as a {{role.name}}ExtraGroupVars parameter. You can then use it in your new template. For example, to configure the StorageSupernet parameter, add the following configuration to your network configuration file:
parameter_defaults:
  ControllerExtraGroupVars:
    storage_supernet: 172.16.0.0/16
You can then add {{ storage_supernet }} to your Jinja2 template.
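A minimal sketch of how the variable might then be consumed, for example to add a supernet route to a Storage VLAN. The surrounding VLAN definition and the storage_gateway_ip variable follow the naming pattern from the mapping table and are assumptions for illustration:

- type: vlan
  device: nic2
  vlan_id: {{ storage_vlan_id }}
  addresses:
  - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
  routes:
  - ip_netmask: {{ storage_supernet }}
    next_hop: {{ storage_gateway_ip }}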
This process does not work if the --network-config option is used with node provisioning. Users who require custom vars should not use the --network-config option. Instead, after creating the heat stack, apply the node network configuration during the config-download Ansible run.
Converting the Ansible variable syntax to programmatically configure each network
When you use a Jinja2 for-loop structure to retrieve the available networks by their role name by iterating over role_networks, you need to retrieve the lower case name for each network role to prepend to each property. Use the following structure to convert the Ansible vars from the above table to the required syntax:

{{ lookup('vars', networks_lower[network] ~ '_<property>') }}
- Replace <property> with the property that you are setting, for example, ip, vlan_id, or mtu.
For example, to populate the value for each NetworkVlanID dynamically, replace {{ <network_name>_vlan_id }} with the following configuration:

{{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
8.2.5.5. Heat parameter to provisioning definition file mappings
If you provision your nodes by running the openstack overcloud node provision command with the --network-config optional argument, you must configure the network properties for your deployment by using the parameters in the node definition file overcloud-baremetal-deploy.yaml.
If your deployment uses pre-provisioned nodes, you can map your heat parameters to Ansible variables to configure the network properties. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument. For more information about configuring network properties by using Ansible variables, see Heat parameter to Ansible variable mappings.
The following table lists the available mappings from the heat parameter to the network_config property equivalent in the node definition file overcloud-baremetal-deploy.yaml.
Heat parameter | network_config property |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The following table lists the available mappings from the heat parameter to the property equivalent in the networks definition file network_data.yaml.
Heat parameter | IPv4 network_data.yaml property | IPv6 network_data.yaml property |
---|---|---|
|
- name: <network_name> subnets: subnet01: ip_subnet: 172.16.1.0/24 |
- name: <network_name> subnets: subnet01: ipv6_subnet: 2001:db8:a::/64 |
|
- name: <network_name> subnets: subnet01: ... vlan: <vlan_id> |
- name: <network_name> subnets: subnet01: ... vlan: <vlan_id> |
|
- name: <network_name> mtu: |
- name: <network_name> mtu: |
|
- name: <network_name> subnets: subnet01: ip_subnet: 172.16.16.0/24 gateway_ip: 172.16.16.1 |
- name: <network_name> subnets: subnet01: ipv6_subnet: 2001:db8:a::/64 gateway_ipv6: 2001:db8:a::1 |
|
- name: <network_name> subnets: subnet01: ... routes: - destination: 172.18.0.0/24 nexthop: 172.18.1.254 |
- name: <network_name> subnets: subnet01: ... routes_ipv6: - destination: 2001:db8:b::/64 nexthop: 2001:db8:a::1 |
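The following sketch shows how such network_config properties might appear in the node definition file overcloud-baremetal-deploy.yaml when you use --network-config. The role, count, template path, and option value are illustrative assumptions:

- name: Controller
  count: 3
  defaults:
    network_config:
      template: /home/stack/templates/custom-nics/controller.j2
      bond_interface_ovs_options: "bond_mode=balance-slb"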
8.2.5.6. Changes to the network data schema
The network data schema was updated in Red Hat OpenStack Platform (RHOSP) 17. The main differences between the network data schema used in RHOSP 16 and earlier and the network data schema used in RHOSP 17 and later are as follows:
- The base subnet has been moved to the subnets map, as shown in the sketch after this list. This aligns the configuration for non-routed and routed deployments, such as spine-leaf networking.
- The enabled option is no longer used to ignore disabled networks. Instead, you must remove disabled networks from the configuration file.
- The compat_name option is no longer required because the heat resource that used it has been removed.
- The following parameters are no longer valid at the network level: ip_subnet, gateway_ip, allocation_pools, routes, ipv6_subnet, gateway_ipv6, ipv6_allocation_pools, and routes_ipv6. These parameters are still used at the subnet level.
- A new parameter, physical_network, has been introduced, which is used to create ironic ports in metalsmith.
- New parameters network_type and segmentation_id replace {{network.name}}NetValueSpecs, which was used to set the network type to vlan.
- The following parameters have been deprecated in RHOSP 17:
  - {{network.name}}NetCidr
  - {{network.name}}SubnetName
  - {{network.name}}Network
  - {{network.name}}AllocationPools
  - {{network.name}}Routes
  - {{network.name}}SubnetCidr_{{subnet}}
  - {{network.name}}AllocationPools_{{subnet}}
  - {{network.name}}Routes_{{subnet}}
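The following before-and-after sketch illustrates the move of the base subnet into the subnets map for a single network. The network name, subnet name, and address values are illustrative assumptions, not taken from a specific deployment.

RHOSP 16 and earlier network_data.yaml:

- name: Storage
  vip: true
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]

RHOSP 17 and later network_data.yaml:

- name: Storage
  vip: true
  name_lower: storage
  subnets:
    storage_subnet:
      ip_subnet: 172.16.1.0/24
      allocation_pools:
      - start: 172.16.1.4
        end: 172.16.1.250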