Chapter 11. Additional network configuration
This chapter follows on from the concepts and procedures outlined in Chapter 10, Custom network interface templates, and provides additional information to help you configure parts of your overcloud network.
11.1. Configuring custom interfaces
Individual interfaces might require modification. The following example shows the modifications that are necessary to use a second NIC to connect to an infrastructure network with DHCP addresses, and to use a third and fourth NIC for the bond:
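A minimal sketch of the relevant network_config entries, assuming the bonded OVS bridge layout from the default templates; the bridge and bond names are illustrative:

network_config:
  # Add a DHCP infrastructure network to nic2
  - type: interface
    name: nic2
    use_dhcp: true
  - type: ovs_bridge
    name: br-bond
    members:
      # Use nic3 and nic4 as the bond members
      - type: ovs_bond
        name: bond1
        ovs_options:
          get_param: BondInterfaceOvsOptions
        members:
          - type: interface
            name: nic3
            primary: true
          - type: interface
            name: nic4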
The network interface template uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces (nic1, nic2, etc.) instead of named interfaces (eth0, eno2, etc.). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth0, eth1, and so on. These are usually onboard interfaces.
- enoX interfaces, such as eno0, eno1, and so on. These are usually onboard interfaces.
- enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, that is, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some hosts with six interfaces, use nic1 to nic4 and attach only four cables on each host.
You can configure os-net-config mappings for specific nodes, and assign aliases to the physical interfaces on each node to pre-determine which physical NIC maps to specific aliases, such as nic1 or nic2. You can also map a MAC address to a specified alias. You map interfaces to aliases in an environment file. You can map specific nodes by using the MAC address or DMI keyword, or you can map a group of nodes by using a DMI keyword. The following example configures three nodes and two node groups with aliases to the physical interfaces. The resulting configuration is applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.
Example os-net-config-mappings.yaml
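A sketch of the mapping file, matching the numbered callouts that follow; the MAC addresses, the node2 entry, and the second node group are illustrative:

parameter_defaults:
  NetConfigDataLookup:
    node1:                                        # 1
      nic1: "00:c8:7c:e6:f0:2e"                   # illustrative MAC address
    node2:
      nic1: "00:18:7d:99:0c:b6"                   # illustrative MAC address
    node3:                                        # 2
      dmiString: "system-uuid"                    # 3
      id: "A8C85861-1B16-4803-8689-AFC62984F8F6"
      nic1: em3
    nodegroup1:                                   # 4
      dmiString: "system-product-name"
      id: "PowerEdge R630"
      nic1: em3
      nic2: em1
      nic3: em2
    nodegroup2:                                   # illustrative second node group
      dmiString: "system-product-name"
      id: "UCS B200 M4"
      nic1: enp7s0
      nic2: enp6s0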
1. Maps node1 to the specified MAC address, and assigns nic1 as the alias for the MAC address on this node.
2. Maps node3 to the node with the system UUID "A8C85861-1B16-4803-8689-AFC62984F8F6", and assigns nic1 as the alias for the em3 interface on this node.
3. The dmiString parameter must be set to a valid string keyword. For a list of the valid string keywords, see the DMIDECODE(8) man page.
4. Maps all the nodes in nodegroup1 to nodes with the product name "PowerEdge R630", and assigns nic1, nic2, and nic3 as the aliases for the named interfaces on these nodes.
- If you want to use the NetConfigDataLookup configuration, you must also include the os-net-config-mappings.yaml file in the NodeUserData resource registry.
- Normally, os-net-config registers only the interfaces that are already connected in an UP state. However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state.
11.2. Configuring routes and default routes
You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.
Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, the default gateway can become unpredictable. In this case, set defroute: false for all interfaces other than the interface that uses the default route.
For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface (nic2):
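A minimal sketch, assuming nic3 obtains the default route through DHCP:

# No default route on this DHCP interface (nic2)
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Take the default route from this DHCP interface (nic3)
- type: interface
  name: nic3
  use_dhcp: true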
The defroute parameter applies only to routes obtained through DHCP.
To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:
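A sketch of such a route entry, assuming the Internal API network is a VLAN on bond1, as in the default templates:

- type: vlan
  device: bond1
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
  routes:
    # Reach 10.1.2.0/24 through the gateway at 172.17.0.1
    - ip_netmask: 10.1.2.0/24
      next_hop: 172.17.0.1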
11.3. Configuring policy-based routing
To configure access to Controller nodes from different networks, use policy-based routing. On a host with multiple interfaces, policy-based routing uses route tables to send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same.
For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface.
Red Hat OpenStack Platform uses the os-net-config tool to configure network properties for your overcloud nodes. The os-net-config tool manages the following network routing on Controller nodes:
- Routing tables in the /etc/iproute2/rt_tables file
- IPv4 rules in the /etc/sysconfig/network-scripts/rule-{ifname} file
- IPv6 rules in the /etc/sysconfig/network-scripts/rule6-{ifname} file
- Routing table specific routes in the /etc/sysconfig/network-scripts/route-{ifname} file
Prerequisites
- You have installed the undercloud successfully. For more information, see Installing director in the Director Installation and Usage guide.
- You have rendered the default .j2 network interface templates from the openstack-tripleo-heat-templates directory. For more information, see Section 10.2, "Rendering default network interface templates for customization".
Procedure
1. Create route_table and interface entries in a custom NIC template from the ~/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment:
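A sketch of such entries, assuming an interface named em1 and illustrative addresses:

network_config:
  # Define a custom routing table
  - type: route_table
    name: custom
    table_id: 200
  - type: interface
    name: em1
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.1/24
    routes:
      # Add this route to the custom table (by ID or name)
      - ip_netmask: 10.1.3.0/24
        next_hop: 192.0.2.5
        table: 200
    rules:
      # Use the custom table for traffic arriving on em1
      - rule: "iif em1 table 200"
        comment: "Route incoming traffic to em1 with table 200"
      # Use the custom table for traffic from 192.0.2.0/24
      - rule: "from 192.0.2.0/24 table 200"
        comment: "Route all traffic from 192.0.2.0/24 with table 200"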
2. Set the run-os-net-config.sh script location to an absolute path in each custom NIC template that you create. The script is located in the /usr/share/openstack-tripleo-heat-templates/network/scripts/ directory on the undercloud:
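A sketch of the resource definition that invokes the script, following the pattern used in the default NIC templates:

resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
                # ... route_table, interface, and rule entries ...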
3. Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files that are relevant to your deployment:

$ openstack overcloud deploy --templates \
  -e ~/templates/<custom-nic-template> \
  -e <OTHER_ENVIRONMENT_FILES>
Verification
Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly:
$ cat /etc/iproute2/rt_tables
$ ip route
$ ip rule
11.4. Configuring jumbo frames
The Maximum Transmission Unit (MTU) setting determines the maximum amount of data that is transmitted in a single Ethernet frame. Using a larger value results in less overhead, because fewer frames, and therefore fewer headers, are needed to transmit the same amount of data. The default value is 1500, and using a higher value requires you to configure the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.
The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface.
The Storage, Storage Management, Internal API, and Tenant networks all benefit from jumbo frames.
Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any floating IP interfaces.
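A sketch under those constraints, assuming the bonded VLAN layout from the default templates: the bond, its member interfaces, and the Internal API VLAN use an MTU of 9000, while the External VLAN keeps the default:

- type: ovs_bond
  name: bond1
  mtu: 9000
  ovs_options:
    get_param: BondInterfaceOvsOptions
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000
# The External VLAN keeps the default MTU of 1500
- type: vlan
  device: bond1
  vlan_id:
    get_param: ExternalNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: ExternalIpSubnet
# The Internal API VLAN uses jumbo frames
- type: vlan
  device: bond1
  mtu: 9000
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet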
11.5. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation
If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network.
ML2/OVS handles this oversized packet issue automatically, and ML2/OVN handles it automatically for TCP packets. However, to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you must perform additional configuration steps. These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets.
In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example:
- VM1 is on Network1 with an MTU of 1300.
- VM2 is on Network2 with an MTU of 1200.
A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.
With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support.
Prerequisites
- RHEL 8.2.0.4 or later with kernel-4.18.0-193.20.1.el8_2 or later.
Procedure
1. Check the kernel version:

$ ovs-appctl -t ovs-vswitchd dpif/show-dp-features br-int

2. If the output includes Check pkt length action: No, or if there is no Check pkt length action string in the output, upgrade to RHEL 8.2.0.4 or later, or do not send jumbo frames to an external network that has a smaller MTU.

3. If the output includes Check pkt length action: Yes, set the following value in the [ovn] section of ml2_conf.ini:

ovn_emit_need_to_frag = True
11.6. Configuring the native VLAN on a trunked interface
If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.
For example, if the External network is on the native VLAN, a bonded configuration looks like this:
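A sketch of that configuration, assuming an OVS bridge with an illustrative name; the External addresses and default route are assigned directly to the bridge, and no VLAN interface is present:

network_config:
  - type: ovs_bridge
    name: br-ex          # illustrative bridge name
    addresses:
      - ip_netmask:
          get_param: ExternalIpSubnet
    routes:
      - default: true
        next_hop:
          get_param: ExternalInterfaceDefaultRoute
    members:
      - type: ovs_bond
        name: bond1
        ovs_options:
          get_param: BondInterfaceOvsOptions
        members:
          - type: interface
            name: nic3
            primary: true
          - type: interface
            name: nic4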
When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.
11.7. Increasing the maximum number of connections that netfilter tracks
The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.
Prerequisites
- A successful RHOSP undercloud installation.
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc

3. Create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-environment.yaml

4. Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings. Enter a new value for the maximum number of connections that netfilter can track in the net.nf_conntrack_max variable.

Example

In this example, you set the conntrack limit across all hosts in your RHOSP deployment:

parameter_defaults:
  ExtraSysctlSettings:
    net.nf_conntrack_max:
      value: 500000
5. Use the <role>Parameters parameter to set the conntrack limit for a specific role:

parameter_defaults:
  <role>Parameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: <simultaneous_connections>

Replace <role> with the name of the role. For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.

Replace <simultaneous_connections> with the number of simultaneous connections that you want to allow.

Example

In this example, you set the conntrack limit for only the Controller role in your RHOSP deployment:

parameter_defaults:
  ControllerParameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: 500000

Note: The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4294967295.
6. Run the deployment command and include the core heat templates, environment files, and this new custom environment file.

Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/my-environment.yaml