Chapter 4. Customizing data plane networks


In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, the OpenStack Operator applies a single NIC with VLANs network configuration to the data plane nodes by default. However, you can modify the network configuration that the OpenStack Operator applies.

You can customize the network configuration for each data plane node set in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required network configuration or modify the existing configuration. Place the configuration in the edpm_network_config_template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              Network configuration options here
              ...

    When modifying your network configuration, refer to Section 4.2, “Network interface configuration options”.

  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml
  5. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset
    Sample output
    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  10. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example
    $ oc get openstackdataplanedeployment -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-deploy     True     Setup Complete
  11. Repeat the oc get command until you see the NodeSet Ready message:

    Example
    $ oc get openstackdataplanenodeset -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

4.2. Network interface configuration options

Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Note

Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic.

4.2.1. interface

Defines a single network interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a node set do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2, instead of named interfaces such as eth0 and eno2. For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.

The order of numbered interfaces corresponds to the order of named network interface types:

  • ethX interfaces, such as eth0, eth1, and so on.

    Names appear in this format when consistent device naming is turned off in udev.

  • enoX and emX interfaces, such as eno0, eno1, em0, em1, and so on.

    These are usually on-board interfaces.

  • enX and any other interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on.

    These are usually add-on interfaces.

The numbered NIC scheme includes only live interfaces, that is, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
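
If the automatic numbering does not match your cabling, you can pin the numbered NICs per host with the edpm_network_config_os_net_config_mappings variable, which is also shown in Section 4.3, “Example custom network interfaces”. The following is a minimal sketch that maps nic1 and nic2 by interface name for two hypothetical hosts whose on-board NICs are named differently; you can also map by MAC address, and the hostnames and device names here are illustrative only:

        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            # Host whose on-board NICs are named em1 and em2 (illustrative)
            nic1: em1
            nic2: em2
          edpm-compute-1:
            # Host whose on-board NICs are named eno1 and eno2 (illustrative)
            nic1: eno1
            nic2: eno2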

Table 4.1. interface options

Option          | Default | Description
----------------|---------|------------
name            |         | Name of the interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3).
use_dhcp        | False   | Use DHCP to get an IP address.
use_dhcpv6      | False   | Use DHCP to get a v6 IP address.
addresses       |         | A list of IP addresses assigned to the interface.
routes          |         | A list of routes assigned to the interface. For more information, see Section 4.2.7, “routes”.
mtu             | 1500    | The maximum transmission unit (MTU) of the connection.
primary         | False   | Defines the interface as the primary interface. Required only when the interface is a member of a bond.
persist_mapping | False   | Write the device alias configuration instead of the system names.
dhclient_args   | None    | Arguments that you want to pass to the DHCP client.
dns_servers     | None    | List of DNS servers that you want to use for the interface.
ethtool_opts    |         | Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on certain NICs.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic2
            ...
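
The following is a minimal sketch of a complete interface entry that combines several options from Table 4.1, “interface options”: a static address, host routes, and DNS servers. It assumes the control plane variables (ctlplane_ip, ctlplane_cidr, ctlplane_host_routes, ctlplane_dns_nameservers) that the other examples in this chapter use; adjust the names and values to match your own network definitions:

          network_config:
          - type: interface
            name: nic2
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}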

4.2.2. vlan

Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

vlan options

Option          | Default | Description
----------------|---------|------------
vlan_id         |         | The VLAN ID.
device          |         | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device.
use_dhcp        | False   | Use DHCP to get an IP address.
use_dhcpv6      | False   | Use DHCP to get a v6 IP address.
addresses       |         | A list of IP addresses assigned to the VLAN.
routes          |         | A list of routes assigned to the VLAN. For more information, see Section 4.2.7, “routes”.
mtu             | 1500    | The maximum transmission unit (MTU) of the connection.
primary         | False   | Defines the VLAN as the primary interface.
persist_mapping | False   | Write the device alias configuration instead of the system names.
dhclient_args   | None    | Arguments that you want to pass to the DHCP client.
dns_servers     | None    | List of DNS servers that you want to use for the VLAN.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            - type: vlan
              device: nic{{ loop.index + 1 }}
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
...
Example - creating a VLAN on an ovs_bridge

To create a VLAN on an ovs_bridge, you must place the VLAN configuration under the members section:

...
network_config:
- type: ovs_bridge
  name: br0
  use_dhcp: false
  members:
  - type: interface
    name: nic5
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...
Example - creating a VLAN on an ovs_user_bridge

To create a VLAN on an ovs_user_bridge, you must place the VLAN configuration under the members section. The members must be either an ovs_dpdk_bond or an ovs_dpdk_port:

...
network_config:
- type: ovs_user_bridge
  name: br-link
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    mtu: 9000
    rx_queue: 4
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic2
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic3
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...

4.2.3. ovs_bridge

Defines a bridge in Open vSwitch (OVS), which connects multiple interface, ovs_bond, and vlan objects together.

The ovs_bridge network interface type takes a name parameter.

Important

Placing Control group networks on an OVS bridge can cause downtime. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, connectivity to the neutron server is lost whenever you upgrade OVS or whenever the OVS bridge is restarted by an administrator or process. If downtime is not acceptable in these circumstances, you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:

  • You can achieve a minimal configuration by placing the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface, as shown in the sketch after this list.
  • To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
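
The following is a minimal sketch of that layout, assuming the provisioning (control plane) network is on nic1 with the Internal API network as a tagged VLAN on the same interface, and an OVS bridge on nic2. The network name internalapi, the bridge name br-ex, and the variable names follow the patterns used elsewhere in this chapter and must match your own network definitions:

          network_config:
          - type: interface
            name: nic1
            mtu: {{ ctlplane_mtu }}
            use_dhcp: false
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
          - type: vlan
            device: nic1
            vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
          - type: ovs_bridge
            name: br-ex
            use_dhcp: false
            members:
            - type: interface
              name: nic2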
Note

If you have multiple bridges, you must use distinct bridge names instead of accepting the default name of bridge_name. If you do not use distinct names, two network bonds are placed on the same bridge during the converge phase.

ovs_bridge options

Option          | Default | Description
----------------|---------|------------
name            |         | Name of the bridge.
use_dhcp        | False   | Use DHCP to get an IP address.
use_dhcpv6      | False   | Use DHCP to get a v6 IP address.
addresses       |         | A list of IP addresses assigned to the bridge.
routes          |         | A list of routes assigned to the bridge. For more information, see Section 4.2.7, “routes”.
mtu             | 1500    | The maximum transmission unit (MTU) of the connection.
members         |         | A sequence of interface, VLAN, and bond objects that you want to use in the bridge.
ovs_options     |         | A set of options to pass to OVS when creating the bridge.
ovs_extra       |         | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge.
defroute        | True    | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping | False   | Write the device alias configuration instead of the system names.
dhclient_args   | None    | Arguments that you want to pass to the DHCP client.
dns_servers     | None    | List of DNS servers that you want to use for the bridge.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-bond
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            members:
            - type: ovs_bond
              name: bond1
              mtu: {{ min_viable_mtu }}
              ovs_options: {{ bond_interface_ovs_options }}
              members:
              - type: interface
                name: nic2
                mtu: {{ min_viable_mtu }}
                primary: true
              - type: interface
                name: nic3
                mtu: {{ min_viable_mtu }}
                ...

4.2.4. Network interface bonding

You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.

Red Hat OpenStack Services on OpenShift (RHOSO) supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.

Table 4.2. Supported interface bonding types

Bond type          | Type value    | Allowed bridge types | Allowed members
-------------------|---------------|----------------------|----------------
OVS kernel bonds   | ovs_bond      | ovs_bridge           | interface
OVS-DPDK bonds     | ovs_dpdk_bond | ovs_user_bridge      | ovs_dpdk_port
Linux kernel bonds | linux_bond    | ovs_bridge           | interface

Important

Do not combine ovs_bridge and ovs_user_bridge on the same node.

ovs_bond

Defines a bond in Open vSwitch (OVS) to join two or more interfaces together. This helps with redundancy and increases bandwidth.

Table 4.3. ovs_bond options

Option          | Default | Description
----------------|---------|------------
name            |         | Name of the bond.
use_dhcp        | False   | Use DHCP to get an IP address.
use_dhcpv6      | False   | Use DHCP to get a v6 IP address.
addresses       |         | A list of IP addresses assigned to the bond.
routes          |         | A list of routes assigned to the bond. For more information, see Section 4.2.7, “routes”.
mtu             | 1500    | The maximum transmission unit (MTU) of the connection.
primary         | False   | Defines the interface as the primary interface.
members         |         | A sequence of interface objects that you want to use in the bond.
ovs_options     |         | A set of options to pass to OVS when creating the bond. For more information, see Table 4.4, “ovs_options parameters for OVS bonds”.
ovs_extra       |         | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond.
defroute        | True    | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping | False   | Write the device alias configuration instead of the system names.
dhclient_args   | None    | Arguments that you want to pass to the DHCP client.
dns_servers     | None    | List of DNS servers that you want to use for the bond.

Table 4.4. ovs_options parameters for OVS bonds

bond_mode=balance-slb
    Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb option, no configuration is required on the remote switch. Each source MAC and VLAN pair is hashed to a link, and all packets from that MAC and VLAN are transmitted through that link. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.

bond_mode=active-backup
    When you configure a bond with the active-backup bond mode, one NIC is kept in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.

lacp=[active | passive | off]
    Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.

other_config:lacp-fallback-ab=true
    Set active-backup as the bond mode if LACP fails.

other_config:lacp-time=[fast | slow]
    Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.

other_config:bond-detect-mode=[miimon | carrier]
    Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.

other_config:bond-miimon-interval=100
    If using miimon, set the heartbeat interval in milliseconds.

bond_updelay=1000
    Set the interval, in milliseconds, that a link must be up before it is activated, to prevent flapping.

other_config:bond-rebalance-interval=10000
    Set the interval, in milliseconds, at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.
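
The following fragment is a hedged illustration of how these parameters combine into an ovs_options string. It sets an OVS bond to balance-slb with miimon link detection; the values are examples only, and the surrounding bond structure follows the OVS bond example that comes next:

            - type: ovs_bond
              name: bond1
              ovs_options: "bond_mode=balance-slb other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
              members:
              - type: interface
                name: nic2
                primary: true
              - type: interface
                name: nic3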

Example - OVS bond
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            members:
              - type: ovs_bond
                name: bond1
                mtu: {{ min_viable_mtu }}
                ovs_options: {{ bond_interface_ovs_options }}
                members:
                - type: interface
                  name: nic2
                  mtu: {{ min_viable_mtu }}
                  primary: true
                - type: interface
                  name: nic3
                  mtu: {{ min_viable_mtu }}
Example - OVS DPDK bond

In this example, a bond is created as part of an OVS user space bridge:

        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            members:
            - type: ovs_user_bridge
              name: br-dpdk0
              members:
              - type: ovs_dpdk_bond
                name: dpdkbond0
                rx_queue: {{ num_dpdk_interface_rx_queues }}
                members:
                - type: ovs_dpdk_port
                  name: dpdk0
                  members:
                  - type: interface
                    name: nic4
                - type: ovs_dpdk_port
                  name: dpdk1
                  members:
                  - type: interface
                    name: nic5

4.2.5. LACP with OVS bonding modes

You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.

Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.

Important

Do not use OVS bonds on control and storage networks. Instead, use Linux bonds with VLAN and LACP.

If you use OVS bonds and restart OVS or the neutron agent for updates, hot fixes, or other events, the control plane can be disrupted.

Table 4.5. LACP options for OVS kernel and OVS-DPDK bond modes

  • Objective: High availability (active-passive)
    OVS bond mode: active-backup
    Compatible LACP options: active, passive, or off

  • Objective: Increased throughput (active-active)
    OVS bond mode: balance-slb
    Compatible LACP options: active, passive, or off
    Notes: Performance is affected by extra parsing per packet. There is a potential for vhost-user lock contention.

  • Objective: Increased throughput (active-active)
    OVS bond mode: balance-tcp
    Compatible LACP options: active or passive
    Notes: As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention. LACP must be configured and enabled. Set lb-output-action=true, for example:

    ovs-vsctl set port <bond port> other_config:lb-output-action=true
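
The following is a minimal sketch of a balance-tcp bond with LACP active, based on the OVS-DPDK bond example in Section 4.2.4, “Network interface bonding”. The ovs_options string and the NIC names are illustrative assumptions; remember to set lb-output-action=true on the bond port as described in Table 4.5:

            - type: ovs_user_bridge
              name: br-dpdk0
              members:
              - type: ovs_dpdk_bond
                name: dpdkbond0
                ovs_options: "bond_mode=balance-tcp lacp=active"
                rx_queue: {{ num_dpdk_interface_rx_queues }}
                members:
                - type: ovs_dpdk_port
                  name: dpdk0
                  members:
                  - type: interface
                    name: nic4
                - type: ovs_dpdk_port
                  name: dpdk1
                  members:
                  - type: interface
                    name: nic5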

4.2.6. linux_bond

Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.

Table 4.6. linux_bond options

Option          | Default | Description
----------------|---------|------------
name            |         | Name of the bond.
use_dhcp        | False   | Use DHCP to get an IP address.
use_dhcpv6      | False   | Use DHCP to get a v6 IP address.
addresses       |         | A list of IP addresses assigned to the bond.
routes          |         | A list of routes assigned to the bond. See Section 4.2.7, “routes”.
mtu             | 1500    | The maximum transmission unit (MTU) of the connection.
members         |         | A sequence of interface objects that you want to use in the bond.
bonding_options |         | A set of options to use when creating the bond. See Table 4.7, “bonding_options”.
defroute        | True    | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping | False   | Write the device alias configuration instead of the system names.
dhclient_args   | None    | Arguments that you want to pass to the DHCP client.
dns_servers     | None    | List of DNS servers that you want to use for the bond.

bonding_options parameters for Linux bonds
The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:
Table 4.7. bonding_options

bonding_options | Description
----------------|------------
mode            | Sets the bonding mode, which in the examples is 802.3ad (LACP mode). For more information about Linux bonding modes, see Configuring network bonding in the Red Hat Enterprise Linux 9 Configuring and managing networking guide.
lacp_rate       | Defines whether LACP packets are sent every 1 second (fast) or every 30 seconds (slow).
updelay         | Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages.
miimon          | The interval, in milliseconds, that is used for monitoring the port state using the MIIMON functionality of the driver.

Example - Linux bond
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond1
            mtu: {{ min_viable_mtu }}
            bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
            members:
            - type: interface
              name: ens1f0
              mtu: {{ min_viable_mtu }}
              primary: true
            - type: interface
              name: ens1f1
              mtu: {{ min_viable_mtu }}
              ...
Example - Linux bond: bonding two interfaces
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond1
            members:
            - type: interface
              name: nic2
            - type: interface
              name: nic3
            bonding_options: "mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100"
            ...
Example - Linux bond set to active-backup mode with one VLAN
....
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond_api
            bonding_options: "mode=active-backup"
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            members:
            - type: interface
              name: nic3
              primary: true
            - type: interface
              name: nic4

            - type: vlan
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              device: bond_api
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
Example - Linux bond on OVS bridge

In this example, the bond is set to 802.3ad with LACP mode and one VLAN:

...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-tenant
            use_dhcp: false
            mtu: 9000
            members:
            - type: linux_bond
              name: bond_tenant
              bonding_options: "mode=802.3ad updelay=1000 miimon=100"
              use_dhcp: false
              dns_servers: {{ ctlplane_dns_nameservers }}
              members:
              - type: interface
                name: p1p1
                primary: true
              - type: interface
                name: p1p2
            - type: vlan
              vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
              ...

4.2.7. routes

Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.

Table 4.8. routes options

Option     | Default | Description
-----------|---------|------------
ip_netmask | None    | IP and netmask of the destination network.
default    | False   | Sets this route as a default route. Equivalent to setting ip_netmask: 0.0.0.0/0.
next_hop   | None    | The IP address of the router used to reach the destination network.

Example - routes
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-tenant
            ...
            routes: {{ [ctlplane_host_routes] | flatten | unique }}
            ...
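
You can also define routes explicitly instead of passing a host routes variable. The following is a minimal sketch with a static route and a default route; the addresses are illustrative only and must match your own network definitions:

          - type: interface
            name: nic2
            use_dhcp: false
            addresses:
            - ip_netmask: 192.0.2.10/24
            routes:
            - ip_netmask: 198.51.100.0/24
              next_hop: 192.0.2.1
            - default: true
              next_hop: 192.0.2.254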

4.3. Example custom network interfaces

The following example illustrates how you can use a template to customize network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Example
This template example configures the control group separately from the OVS bridge. The template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. On nic2 and nic3, the template creates a Linux bond for control plane traffic. The template creates an OVS bridge for the RHOSO data plane on nic4 and nic5.
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            dmiString: system-serial-number
            id: 3V3J4V3
            nic1: ec:2a:72:40:ca:2e
            nic2: 6c:fe:54:3f:8a:00
            nic3: 6c:fe:54:3f:8a:01
            nic4: 6c:fe:54:3f:8a:02
            nic5: 6c:fe:54:3f:8a:03
            nic6: e8:eb:d3:33:39:12
            nic7: e8:eb:d3:33:39:13

        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false
            use_dhcpv6: false
          - type: linux_bond
            name: bond_api
            use_dhcp: false
            use_dhcpv6: false
            bonding_options: "mode=active-backup"
            dns_servers: {{ ctlplane_dns_nameservers }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes:
            - default: true
              next_hop: 192.168.122.1
            members:
              - type: interface
                name: nic2
                primary: true
              - type: interface
                name: nic3
          {% for network in nodeset_networks if network not in ['external', 'tenant'] %}
          - type: vlan
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          {% endfor %}
          - type: ovs_bridge
            name: br-access
            use_dhcp: false
            use_dhcpv6: false
            members:
            - type: linux_bond
              name: bond_data
              mtu: {{ min_viable_mtu }}
              bonding_options: "mode=active-backup"
              members:
              - type: interface
                name: nic4
              - type: interface
                name: nic5
            - type: vlan
              vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
              mtu: {{ lookup('vars', networks_lower['tenant'] ~ '_mtu') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower['tenant'] ~ '_host_routes') }}