Configuring networking services


Red Hat OpenStack Services on OpenShift 18.0

Configuring the Networking service (neutron) for managing networking traffic in a Red Hat OpenStack Services on OpenShift environment

OpenStack Documentation Team

Abstract

Configure your Networking service (neutron) in a Red Hat OpenStack Services on OpenShift environment.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  4. Click Create.
  5. Review the details of the bug you created.

Chapter 1. Introduction to OpenStack networking

The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Services on OpenShift (RHOSO). It manages traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata.

The Networking service also provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls.

1.1. Managing your RHOSO networks

With the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron), you can effectively meet your site’s networking goals. You can perform the following tasks:

  • Customize the networks used in your data plane.

    In RHOSO, the network configuration applied by default to the data plane nodes is the single NIC VLANs configuration. However, you can modify the network configuration that the OpenStack Operator applies for each data plane node set in your RHOSO environment.

  • Provide connectivity to VM instances within a project.

    Project networks primarily enable general (non-privileged) projects to manage networks without involving administrators. These networks are entirely virtual and require virtual routers to interact with other project networks and external networks such as the Internet. Project networks also usually provide DHCP and metadata services to VM (virtual machine) instances. RHOSO supports the following project network types: flat, VLAN, and GENEVE.

  • Set ingress and egress limits for traffic on VM instances.

    You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You can also apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.

  • Control which projects can attach instances to a shared network.

    Using role-based access control (RBAC) policies in the RHOSO Networking service, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project.

  • Secure your network at the port level.

    Security groups provide a container for virtual firewall rules that control ingress (inbound to instances) and egress (outbound from instances) network traffic at the port level. Security groups use a default deny policy and only contain rules that allow specific traffic. Each port can reference one or more security groups in an additive fashion. ML2/OVN uses the Open vSwitch firewall driver to translate security group rules to a configuration.

    By default, security groups are stateful. In ML2/OVN deployments, you can also create stateless security groups. A stateless security group can provide significant performance benefits. Unlike stateful security groups, stateless security groups do not automatically allow returning traffic, so you must create a complementary security group rule to allow the return of related traffic, as shown in the following sketch.
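
    The following commands are a minimal sketch of creating a stateless security group with a complementary rule for return traffic. The group name and port number are examples only, and the --stateless option requires a client and Networking service version that support stateless security groups:

    $ openstack security group create --stateless my-stateless-sg
    $ openstack security group rule create --ingress --protocol tcp --dst-port 80 my-stateless-sg
    $ openstack security group rule create --egress --protocol tcp my-stateless-sg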

1.2. Networking service components

The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) includes the following components:

  • API server

    The RHOSO networking API includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. RHOSO networking includes a growing list of plug-ins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.

  • Modular Layer 2 (ML2) plug-in and agents

    ML2 plugs and unplugs ports, creates networks or subnets, and provides IP addressing.

  • Messaging queue

    Accepts and routes RPC requests between RHOSO services to complete API operations.

1.3. Modular Layer 2 (ML2) networking

Modular Layer 2 (ML2) is the Red Hat OpenStack Services on OpenShift (RHOSO) networking core plug-in. The ML2 modular design enables the concurrent operation of mixed network technologies through mechanism drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2.

The ML2 framework distinguishes between the two kinds of drivers that can be configured:

Type drivers

Define how a RHOSO network is technically realized.

Each available network type is managed by an ML2 type driver, which maintains any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for allocating a free segment for project networks. Examples of type drivers are GENEVE, VLAN, and flat networks.

Mechanism drivers

Define the mechanism to access a RHOSO network of a certain type.

The mechanism driver takes the information established by the type driver and applies it to the networking mechanisms that have been enabled. RHOSO uses the OVN mechanism driver.

Mechanism drivers can employ L2 agents and, by using RPC, can interact directly with external devices or controllers. You can use multiple mechanism and type drivers simultaneously to access different ports of the same virtual network.

1.4. ML2 network types

You can operate multiple network segments at the same time. ML2 supports the use and interconnection of multiple network segments. You do not have to bind a port to a network segment because ML2 binds ports to segments with connectivity. Depending on the mechanism driver, ML2 supports the following network segment types:

  • Flat
  • VLAN
  • GENEVE tunnels
Flat
All virtual machine (VM) instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs.
VLAN

With RHOSO networking, users can create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other network infrastructure on the same Layer 2 VLAN.

You can use VLANs to segment network traffic for computers running on the same switch. This means that you can logically divide your switch by configuring the ports to be members of different networks — they are basically mini-LANs that you can use to separate traffic for security reasons.

For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and any traffic between them must pass through a router as if they were connected to two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other.

GENEVE tunnels
Generic Network Virtualization Encapsulation (GENEVE) recognizes and accommodates the changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation and adapts to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver.
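
For illustration, the following commands are a hedged sketch of how an administrator might create networks of each segment type; the physical network name datacentre and the segment ID are examples only and must match your ML2 configuration:

$ openstack network create --provider-network-type flat --provider-physical-network datacentre flat-net
$ openstack network create --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201 vlan-net
$ openstack network create --provider-network-type geneve geneve-net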

1.5. Networking service extensions

The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is extensible. Extensions serve two purposes: they allow the introduction of new features in the API without requiring a version change and they allow the introduction of vendor specific niche functionality. Applications can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is a versioned request; that is, an extension available in one API version might not be available in another.
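
For example, the following commands show two ways you might list the available extensions; the endpoint URL, port, and token variable are placeholders for your own Networking service endpoint and credentials:

$ openstack extension list --network
$ curl -s -H "X-Auth-Token: $OS_TOKEN" https://<networking_endpoint>:9696/v2.0/extensions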

The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include support for QoS, port security, and so on.

Chapter 2. Working with ML2/OVN

Red Hat OpenStack Services on OpenShift (RHOSO) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for RHOSO ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver.

2.1. Open Virtual Network (OVN)

Open Virtual Network (OVN) is a system to support logical network abstraction in virtual machine and container environments. OVN is used as the mechanism driver for the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron).

Sometimes called open source virtual networking for Open vSwitch (OVS), OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups and services such as DHCP.

A physical network comprises physical wires, switches, and routers. A virtual network extends a physical network into a hypervisor or container platform, bridging VMs or containers into the physical network. An OVN logical network is a network implemented in software that is insulated from physical networks by tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to overlap with those used on physical networks without causing conflicts. Logical network topologies can be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs that are part of a logical network can migrate from one physical machine to another without network disruption.

The encapsulation layer prevents VMs and containers connected to a logical network from communicating with nodes on physical networks. For clustering VMs and containers, this can be acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical networks. OVN provides multiple forms of gateways for this purpose.

An OVN deployment consists of several components:

Cloud Management System (CMS)
Integrates OVN into a physical network by managing the OVN logical network elements and connecting the OVN logical network infrastructure to physical network elements. Examples include OpenStack and OpenShift.
OVN databases
Store data representing the OVN logical and physical networks.
Hypervisors
Run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual machine.
Gateways
Extend a tunnel-based OVN logical network into a physical network by forwarding packets between tunnels and the physical network infrastructure.

2.2. OVN architecture

Open Virtual Network (OVN) provides networking services for Red Hat OpenStack Services on OpenShift (RHOSO) environments. As illustrated in Figure 2.1, the OVN architecture consists of the following components and services:

Networking service
This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and load balancer details, among others.
Compute node
This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances.
ML2 plug-in with OVN mechanism driver
The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the RHOSO control plane on OpenShift worker nodes.
OVN northbound (NB) database (ovn-nb)

This database stores the logical OVN networking configuration from the OVN ML2 plugin. It typically runs on the RHOSO control plane and listens on TCP port 6641.

The northbound database (OVN_Northbound) serves as the interface between OVN and a cloud management system such as RHOSO. RHOSO produces the contents of the northbound database.

The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. Every RHOSO Networking service (neutron) object is represented in a table in the northbound database.

OVN northbound service (ovn-northd)
This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN Southbound database. It typically runs on the RHOSO control plane.
OVN southbound (SB) database (ovn-sb)

This database stores the converted logical data path flows. It typically runs on the RHOSO control plane and listens on TCP port 6642.

The southbound database (OVN_Southbound) holds the logical and physical configuration state for the OVN system to support virtual network abstraction. The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements.

Note

The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema.

OVS database server (OVSDB)
Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db.
OVN controller (ovn-controller)
This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes.
OVN metadata agent (ovn-metadata-agent)

This agent manages the OVS interfaces, network namespaces, and HAProxy processes that are used to proxy metadata API requests. The agent runs on all Compute and gateway nodes.

The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>).

OpenStack guest instances access the Networking metadata service available at the link-local IP address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.
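
As a sketch of how you might inspect the northbound and southbound databases described in this list, assuming you have shell access to a host or pod that runs the OVN database servers and the ovn-nbctl and ovn-sbctl tools, commands such as the following display the logical and physical state. The database host placeholders, and whether the connection uses tcp or ssl, depend on your deployment:

$ ovn-nbctl --db=tcp:<nb_db_host>:6641 show
$ ovn-sbctl --db=tcp:<sb_db_host>:6642 show
$ ovn-sbctl --db=tcp:<sb_db_host>:6642 list Chassis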

Figure 2.1. OVN architecture in a RHOSO environment


2.3. Layer 3 high availability with OVN

OVN supports Layer 3 high availability (L3 HA) without any special configuration in Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Note

When you create a router, do not use the --ha option, because OVN routers are highly available by default. openstack router create commands that include the --ha option fail.

OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. The ovn-controller also periodically sends gratuitous ARPs for floating IPs (FIPs) and router external addresses.
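
For example, the following hedged sketch creates a highly available router without the --ha option and then shows one way to inspect how OVN scheduled its gateway port; the router name, external network name, and port UUID are placeholders:

$ openstack router create router1
$ openstack router set --external-gateway public router1
$ ovn-nbctl lrp-get-gateway-chassis lrp-<router_gateway_port_uuid>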

Note

L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck.

BFD monitoring
OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the GENEVE tunnels established from node to node.

Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the Compute nodes so that the gateways can enable and disable routing of packets and ARP responses and announcements.

Each Compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other Compute nodes.

Note

External network failures are not detected as they would be with an ML2/OVS configuration.

L3 HA for OVN supports the following failure modes:

  • The gateway node becomes disconnected from the network (tunneling interface).
  • ovs-vswitchd stops (ovs-vswitchd is responsible for BFD signaling).
  • ovn-controller stops (ovn-controller removes itself as a registered node).
Note

This BFD monitoring mechanism only works for link failures, not for routing failures.

2.4. Clustered database service model

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, OVN uses a clustered database service model that applies the Raft consensus algorithm to enhance performance of OVS database protocol traffic and provide faster, more reliable failover handling.

A clustered database operates on a cluster of at least three database servers on different hosts. Servers use the Raft consensus algorithm to synchronize writes and share network traffic continuously across the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database read operations, which mitigates potential bottlenecks on the control plane. Write operations are handled by the cluster leader.

If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining operational servers. The clustered database service model handles failovers more efficiently than the pacemaker-based model did. This mitigates related downtime and complications that can occur with longer failover times.

The leader election process requires a majority, so the fault tolerance capacity is limited by the highest odd number in the cluster. For example, a three-server cluster continues to operate if one server fails. A five-server cluster tolerates up to two failures. Increasing the number of servers to an even number does not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a three-server cluster.

Most RHOSO deployments use three servers.

Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate an additional failure, but write performance decreases.
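
As a hedged sketch, assuming shell access to a pod or host that runs the database servers, you can check the Raft cluster status with ovs-appctl; the control socket paths are examples and can differ in your deployment:

$ ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
$ ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound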

2.5. SR-IOV with ML2/OVN and native OVN DHCP

You can deploy a custom node set to use SR-IOV in an ML2/OVN deployment with native OVN DHCP in Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Limitations

The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release.

  • All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports.
  • North/south routing on VF (direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router’s gateway ports.

Chapter 3. Configuring OVN gateways

An OVN gateway connects the logical OpenStack tenant network to a physical external network. Many RHOSO environments have at least one OVN gateway and might have more than one physical external network and more than one OVN gateway.

Some environments do not include an OVN gateway. For example, an environment might not need an OVN gateway because external connectivity is not required, because the environment does not use centralized floating IPs or routers and its workloads connect directly to provider networks, or because some other connection method is used.

You can choose where OVN gateways are configured. OVN gateway location choices include the following:

Control plane
OVN gateways on RHOCP worker nodes that host the OpenStack controller services. Place the OVN gateway on a dedicated NIC whose sole purpose is to provide an interface to the OVN gateway.
Data plane
OVN gateways on dedicated Networker nodes on the data plane.

Control plane OVN gateways can be subject to more disruption than data plane OVN gateways.

You can place OVN gateways on dedicated NICs on the control plane nodes. This reduces the potential for interruption but requires an additional NIC.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • Each RHOCP worker node that hosts the RHOSO control plane has a NIC dedicated to an OVN gateway. Use the same NIC name for the dedicated NIC on each node. In addition, each worker node has at least the two NICs described in Red Hat OpenShift Container Platform cluster requirements.
  • Your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, exists on your workstation.

Procedure

  1. Open the OpenStackControlPlane CR definition file, openstack_control_plane.yaml.
  2. Add the following ovnController configuration, including nicMappings, to the ovn service configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
    ...
      ovn:
        template:
          ovnController:
            networkAttachment: tenant
            nicMappings:
              <network_name>: <nic_name>
    • Replace <network_name> with the name of the physical provider network your gateway is on. This should match the value of the --provider-physical-network argument to the openstack network create command used to create the network. For example, datacentre.
    • Replace <nic_name> with the name of the NIC connecting to the gateway network, such as enp6s0.
    • Optional: Add additional <network_name>:<nic_name> pairs under nicMappings as required.
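
    For example, with illustrative values filled in for the placeholders described above, the mapping might look like the following; the network name datacentre and NIC name enp6s0 are examples only:

      nicMappings:
        datacentre: enp6s0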
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack

    The ovn-operator creates the network attachment definitions, adds them to the pods, creates an external bridge, and configures external-ids:ovn-bridge-mappings. The setting external-ids:ovn-cms-options=enable-chassis-as-gw is configured by default.

  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running. Verify that ovn-controller and ovn-controller-ovs pods are running, and that the number of running pods is equal to the number of OCP control plane nodes where OpenStack control plane services are running.

Verification

  1. Run a remote shell command on the OpenStackClient pod to confirm that the OVN Controller Gateway Agents are running on the control plane nodes:

    $ oc rsh -n openstack openstackclient openstack network agent list
    Example output
    +--------------------------------------+------------------------------+---------+
    | ID                                   | agent_type                   | host    |
    +--------------------------------------+------------------------------+---------+
    | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl0   |
    | ff66288c-5a7c-41fb-ba54-6c781f95a81e | OVN Controller Gateway agent | ctrl1   |
    | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl2   |
    +--------------------------------------+------------------------------+---------+

You can configure a deployment with no control plane OVN gateways. For example, you might configure data plane OVN gateways only, or no OVN gateways at all.

To configure a deployment with no control plane OVN gateways, omit the ovnController configuration from the control plane custom resource (CR).

Prerequisites

  • RHOSO 18.0.3 (Feature Release 1) or later.
  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. If there is an ovnController section:

    1. Remove the ovnController section.
    2. Update the control plane:

      $ oc apply -f openstack_control_plane.yaml -n openstack
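
      As a hedged verification sketch, you can then confirm that no ovn-controller pods remain on the control plane once the operator reconciles the change; the name filter is an example and might differ in your environment:

      $ oc get pods -n openstack | grep ovn-controller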

Chapter 4. Customizing data plane networks

In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, the network configuration applied by default to the data plane nodes is the single NIC VLANs configuration. However, you can modify the network configuration that the OpenStack Operator applies.

4.1. Customizing the network configuration of a data plane node set

You can customize the network configuration for each data plane node set in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required network configuration or modify the existing configuration. Place the configuration in the edpm_network_config_template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              Network configuration options here
              ...

    When modifying your network configuration, refer to Section 4.2, “Network interface configuration options”.

  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml
  5. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset
    Sample output
    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  10. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example
    $ oc get openstackdataplanedeployment -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-deploy     True     Setup Complete
  11. Repeat the oc get command until you see the NodeSet Ready message:

    Example
    $ oc get openstackdataplanenodeset -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

4.2. Network interface configuration options

Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Note

Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic.

4.2.1. interface

Defines a single network interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2, instead of named interfaces such as eth0 and eno2. For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.

The order of numbered interfaces corresponds to the order of named network interface types:

  • ethX interfaces, such as eth0, eth1, and so on.

    Names appear in this format when consistent device naming is turned off in udev.

  • enoX and emX interfaces, such as eno0, eno1, em0, em1, and so on.

    These are usually on-board interfaces.

  • enX and any other interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on.

    These are usually add-on interfaces.

The numbered NIC scheme includes only live interfaces, that is, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.

Table 4.1. interface options

  • name: Name of the interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3).
  • use_dhcp (default: False): Use DHCP to get an IP address.
  • use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
  • addresses: A list of IP addresses assigned to the interface.
  • routes: A list of routes assigned to the interface. For more information, see Section 4.2.7, “routes”.
  • mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
  • primary (default: False): Defines the interface as the primary interface.
  • persist_mapping (default: False): Write the device alias configuration instead of the system names.
  • dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
  • dns_servers (default: None): List of DNS servers that you want to use for the interface.
  • ethtool_opts: Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on certain NICs.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic2
            ...

4.2.2. vlan

Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.

vlan options

  • vlan_id: The VLAN ID.
  • device: The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device.
  • use_dhcp (default: False): Use DHCP to get an IP address.
  • use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
  • addresses: A list of IP addresses assigned to the VLAN.
  • routes: A list of routes assigned to the VLAN. For more information, see Section 4.2.7, “routes”.
  • mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
  • primary (default: False): Defines the VLAN as the primary interface.
  • persist_mapping (default: False): Write the device alias configuration instead of the system names.
  • dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
  • dns_servers (default: None): List of DNS servers that you want to use for the VLAN.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            - type: vlan
              device: nic{{ loop.index + 1 }}
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
...
Example - creating a VLAN on an ovs_bridge

To create a VLAN on an ovs_bridge, you must place the VLAN configuration under the members section:

...
network_config:
- type: ovs_bridge
  name: br0
  use_dhcp: false
  members:
  - type: interface
    name: nic5
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...
Example - creating a VLAN on an ovs_user_bridge

To create a VLAN on an ovs_user_bridge, you must place the VLAN configuration under the members section. The members must be either an ovs_dpdk_bond or an ovs_dpdk_port:

...
network_config:
- type: ovs_user_bridge
  name: br-link
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    mtu: 9000
    rx_queue: 4
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic2
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic3
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...

4.2.3. ovs_bridge

Defines a bridge in Open vSwitch (OVS), which connects multiple interface, ovs_bond, and vlan objects together.

The network interface type, ovs_bridge, takes a parameter name.

Important

Placing Control group networks on the ovs_bridge interface can cause downtime. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:

  • You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
  • To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
Note

If you have multiple bridges, you must use distinct bridge names rather than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.

ovs_bridge options

  • name: Name of the bridge.
  • use_dhcp (default: False): Use DHCP to get an IP address.
  • use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
  • addresses: A list of IP addresses assigned to the bridge.
  • routes: A list of routes assigned to the bridge. For more information, see Section 4.2.7, “routes”.
  • mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
  • members: A sequence of interface, VLAN, and bond objects that you want to use in the bridge.
  • ovs_options: A set of options to pass to OVS when creating the bridge.
  • ovs_extra: A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge.
  • defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
  • persist_mapping (default: False): Write the device alias configuration instead of the system names.
  • dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
  • dns_servers (default: None): List of DNS servers that you want to use for the bridge.

Example
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-bond
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            members:
            - type: ovs_bond
              name: bond1
              mtu: {{ min_viable_mtu }}
              ovs_options: {{ bond_interface_ovs_options }}
              members:
              - type: interface
                name: nic2
                mtu: {{ min_viable_mtu }}
                primary: true
              - type: interface
                name: nic3
                mtu: {{ min_viable_mtu }}
                ...

4.2.4. Network interface bonding

You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.

Red Hat OpenStack Services on OpenShift (RHOSO) supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.

Table 4.2. Supported interface bonding types

  • OVS kernel bonds: type value ovs_bond; allowed bridge type ovs_bridge; allowed members interface.
  • OVS-DPDK bonds: type value ovs_dpdk_bond; allowed bridge type ovs_user_bridge; allowed members ovs_dpdk_port.
  • Linux kernel bonds: type value linux_bond; allowed bridge type ovs_bridge; allowed members interface.

Important

Do not combine ovs_bridge and ovs_user_bridge on the same node.

ovs_bond

Defines a bond in Open vSwitch (OVS) to join two or more interfaces together. This helps with redundancy and increases bandwidth.

Table 4.3. ovs_bond options

  • name: Name of the bond.
  • use_dhcp (default: False): Use DHCP to get an IP address.
  • use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
  • addresses: A list of IP addresses assigned to the bond.
  • routes: A list of routes assigned to the bond. For more information, see Section 4.2.7, “routes”.
  • mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
  • primary (default: False): Defines the interface as the primary interface.
  • members: A sequence of interface objects that you want to use in the bond.
  • ovs_options: A set of options to pass to OVS when creating the bond. For more information, see Table 4.4, “ovs_options parameters for OVS bonds”.
  • ovs_extra: A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond.
  • defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
  • persist_mapping (default: False): Write the device alias configuration instead of the system names.
  • dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
  • dns_servers (default: None): List of DNS servers that you want to use for the bond.

Table 4.4. ovs_options parameters for OVS bonds

  • bond_mode=balance-slb: Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.
  • bond_mode=active-backup: When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.
  • lacp=[active | passive | off]: Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.
  • other-config:lacp-fallback-ab=true: Set active-backup as the bond mode if LACP fails.
  • other_config:lacp-time=[fast | slow]: Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
  • other_config:bond-detect-mode=[miimon | carrier]: Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
  • other_config:bond-miimon-interval=100: If using miimon, set the heartbeat interval in milliseconds.
  • bond_updelay=1000: Set the interval, in milliseconds, that a link must be up to be activated, to prevent flapping.
  • other_config:bond-rebalance-interval=10000: Set the interval, in milliseconds, at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.

Example - OVS bond
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            members:
              - type: ovs_bond
                name: bond1
                mtu: {{ min_viable_mtu }}
                ovs_options: {{ bond_interface_ovs_options }}
                members:
                - type: interface
                  name: nic2
                  mtu: {{ min_viable_mtu }}
                  primary: true
                - type: interface
                  name: nic3
                  mtu: {{ min_viable_mtu }}
Example - OVS DPDK bond

In this example, a bond is created as part of an OVS user space bridge:

        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          ...
            members:
            - type: ovs_user_bridge
              name: br-dpdk0
              members:
              - type: ovs_dpdk_bond
                name: dpdkbond0
                rx_queue: {{ num_dpdk_interface_rx_queues }}
                members:
                - type: ovs_dpdk_port
                  name: dpdk0
                  members:
                  - type: interface
                    name: nic4
                - type: ovs_dpdk_port
                  name: dpdk1
                  members:
                  - type: interface
                    name: nic5

4.2.5. LACP with OVS bonding modes

You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.

Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.

Important

Do not use OVS bonds on control and storage networks. Instead, use Linux bonds with VLAN and LACP.

If you use OVS bonds, and restart the OVS or the neutron agent for updates, hot fixes, and other events, the control plane can be disrupted.

Table 4.5. LACP options for OVS kernel and OVS-DPDK bond modes

  • High availability (active-passive): OVS bond mode active-backup. Compatible LACP options: active, passive, or off.
  • Increased throughput (active-active): OVS bond mode balance-slb. Compatible LACP options: active, passive, or off. Notes:
    • Performance is affected by extra parsing per packet.
    • There is a potential for vhost-user lock contention.
  • Increased throughput (active-active): OVS bond mode balance-tcp. Compatible LACP options: active or passive. Notes:
    • As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention.
    • LACP must be configured and enabled.
    • Set lb-output-action=true. For example:

      ovs-vsctl set port <bond port> other_config:lb-output-action=true

4.2.6. linux_bond

Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.

Table 4.6. linux_bond options

  • name: Name of the bond.
  • use_dhcp (default: False): Use DHCP to get an IP address.
  • use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
  • addresses: A list of IP addresses assigned to the bond.
  • routes: A list of routes assigned to the bond. See Section 4.2.7, “routes”.
  • mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
  • members: A sequence of interface objects that you want to use in the bond.
  • bonding_options: A set of options when creating the bond. See bonding_options parameters for Linux bonds.
  • defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
  • persist_mapping (default: False): Write the device alias configuration instead of the system names.
  • dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
  • dns_servers (default: None): List of DNS servers that you want to use for the bond.

bonding_options parameters for Linux bonds
The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:
Table 4.7. bonding_options

  • mode: Sets the bonding mode, which in the example is 802.3ad or LACP mode. For more information about Linux bonding modes, see Configuring a network bond in Red Hat Enterprise Linux 9, Configuring and managing networking.
  • lacp_rate: Defines whether LACP packets are sent every 1 second, or every 30 seconds.
  • updelay: Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages.
  • miimon: The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver.

Example - Linux bond
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond1
            mtu: {{ min_viable_mtu }}
            bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
            members:
            - type: interface
              name: ens1f0
              mtu: {{ min_viable_mtu }}
              primary: true
            - type: interface
              name: ens1f1
              mtu: {{ min_viable_mtu }}
              ...
Example - Linux bond: bonding two interfaces
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond1
            members:
            - type: interface
              name: nic2
            - type: interface
              name: nic3
            bonding_options: "mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100"
            ...
Example - Linux bond set to active-backup mode with one VLAN
....
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: linux_bond
            name: bond_api
            bonding_options: "mode=active-backup"
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            members:
            - type: interface
              name: nic3
              primary: true
            - type: interface
              name: nic4

          - type: vlan
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
Example - Linux bond on OVS bridge

In this example, the bond is set to 802.3ad with LACP mode and one VLAN:

...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-tenant
            use_dhcp: false
            mtu: 9000
            members:
            - type: linux_bond
              name: bond_tenant
              bonding_options: "mode=802.3ad updelay=1000 miimon=100"
              use_dhcp: false
              dns_servers: {{ ctlplane_dns_nameservers }}
              members:
              - type: interface
                name: p1p1
                primary: true
              - type: interface
                name: p1p2
            - type: vlan
              vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
              ...

4.2.7. routes

Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.

Table 4.8. routes options

  • ip_netmask (default: None): IP and netmask of the destination network.
  • default (default: False): Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0.
  • next_hop (default: None): The IP address of the router used to reach the destination network.

Example - routes
...
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: br-tenant
            ...
            routes: {{ [ctlplane_host_routes] | flatten | unique }}
            ...

4.3. Example custom network interfaces

The following example illustrates how you can use a template to customize network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.

Example
This template example configures the control group separately from the OVS bridge. The template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. On nic2 and nic3, the template creates a Linux bond for control plane traffic. The template creates OVS bridges for the RHOSO data plane on nic4 and nic5.
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            dmiString: system-serial-number
            id: 3V3J4V3
            nic1: ec:2a:72:40:ca:2e
            nic2: 6c:fe:54:3f:8a:00
            nic3: 6c:fe:54:3f:8a:01
            nic4: 6c:fe:54:3f:8a:02
            nic5: 6c:fe:54:3f:8a:03
            nic6: e8:eb:d3:33:39:12
            nic7: e8:eb:d3:33:39:13

        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          - type: interface
            name: nic1
            use_dhcp: false
            use_dhcpv6: false
          - type: linux_bond
            name: bond_api
            use_dhcp: false
            use_dhcpv6: false
            bonding_options: "mode=active-backup"
            dns_servers: {{ ctlplane_dns_nameservers }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes:
            - default: true
              next_hop: 192.168.122.1
            members:
              - type: interface
                name: nic2
                primary: true
              - type: interface
                name: nic3
          {% for network in nodeset_networks if network not in ['external', 'tenant'] %}
          - type: vlan
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          {% endfor %}
          - type: ovs_bridge
            name: br-access
            use_dhcp: false
            use_dhcpv6: false
            members:
            - type: linux_bond
              name: bond_data
              mtu: {{ min_viable_mtu }}
              bonding_options: "mode=active-backup"
              members:
              - type: interface
                name: nic4
              - type: interface
                name: nic5
            - type: vlan
              vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
              mtu: {{ lookup('vars', networks_lower['tenant'] ~ '_mtu') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower['tenant'] ~ '_host_routes') }}

Chapter 5. Configuring Networker nodes

In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can add Networker nodes to the RHOSO data plane. Networker nodes can serve as gateways to external networks.

With or without gateways, Networker nodes can serve other purposes as well. For example, Networker nodes are required when you deploy the neutron-dhcp-agent in a RHOSO environment that has a routed spine-leaf network topology with DHCP relays running on leaf nodes. Networker nodes can also provide metadata for SR-IOV ports.

If your NICs support DPDK, you can enable DPDK on the Networker node interfaces to accelerate gateway traffic processing.

Networker nodes are similar to other RHOSO data plane nodes such as Compute nodes. Like Compute nodes, Networker nodes use the RHEL 9.4 or 9.6 operating system. Networker nodes and Compute nodes share some common services and configuration features, and each has a set of role-specific services and configurations. For example, unlike Compute nodes, Networker nodes do not require the Nova or libvirt services.

A data plane typically consists of multiple OpenStackDataPlaneNodeSet custom resources (CRs) to define sets of nodes with different configurations and roles. For example, one node set might define your data plane Networker nodes. Others might define functionally related sets of Compute nodes.

You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:

  • Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
  • Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
Note

You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.

To create and deploy a data plane with or without Networker nodes, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes (Networker nodes and Compute nodes).
  2. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.

    One of the following procedures describes how to create Networker node sets with pre-provisioned nodes. The other describes how to create Networker node sets with unprovisioned bare-metal nodes that must be provisioned during the node set deployment.

  3. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs. A minimal sketch of this CR follows this list.
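
The following minimal sketch illustrates the OpenStackDataPlaneDeployment CR that task 3 describes. The names networker-deploy and networker-nodes are assumptions for illustration only; use the names of your own deployment and node set CRs.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: networker-deploy        # hypothetical name for illustration
  namespace: openstack
spec:
  nodeSets:                     # list every OpenStackDataPlaneNodeSet CR to deploy
    - networker-nodes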

5.1. Prerequisites

  • A functional control plane, created with the OpenStack Operator.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

5.2. Creating the data plane secrets

You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.

To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:

  • An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  • An SSH key to enable migration of instances between Compute nodes.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: libvirt-secret
        namespace: openstack
      type: Opaque
      data:
        LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password in plain text. A sketch of this alternative follows this procedure.

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret
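
If you prefer not to base64-encode values, as mentioned in the tip for the libvirt secret, you can use the stringData field. The following is a minimal sketch of that alternative; the placeholder password is illustrative only:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
stringData:
  LibvirtPassword: <plain_text_password>  # Kubernetes encodes stringData values automatically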

You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of pre-provisioned Networker nodes in your data plane. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
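
For example, the following minimal sketch (the variable and values are illustrative only) shows a value set once in the nodeTemplate section and overridden for a single node:

  nodeTemplate:
    ansible:
      ansibleUser: cloud-admin                # applies to every node in the set
      ansibleVars:
        fqdn_internal_api: default.example.com
  nodes:
    edpm-networker-0:
      ansible:
        ansibleVars:
          fqdn_internal_api: edpm-networker-0.example.com  # overrides the nodeTemplate value for this node only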

Tip

For an example OpenStackDataPlaneNodeSet CR that configures a set of pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.

If you want to use OVS-DPDK on a set of pre-provisioned Networker nodes, you must use a different configuration in the OpenStackDataPlaneNodeSet CR. For an example, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.

Procedure

  1. Create a file on your workstation named openstack_preprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. If necessary, replace the example name networker-nodes with a name that more accurately describes your node set.
    • env - Optional: a list of environment variables to pass to the pod.
  2. Include the services field to override the default services. Remove the nova, libvirt, and other services that are not required by a Networker node:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
    ...
      services:
       - redhat
       - bootstrap
       - download-cache
       - reboot-os
       - configure-ovs-dpdk
       - configure-network
       - validate-network
       - install-os
       - configure-os
       - ssh-known-hosts
       - run-os
       - install-certs
       - ovn
       - neutron-metadata
       - neutron-dhcp
    • configure-ovs-dpdk - The configure-ovs-dpdk service is required only when DPDK NICs are used in the deployment.
    • neutron-metadata - The neutron-metadata service is required only when SR-IOV ports are used in the deployment.
    • neutron-dhcp - You can optionally run the neutron-dhcp service on your Networker nodes. You might need neutron-dhcp with OVN if your deployment uses DHCP relays, or advanced DHCP options that dnsmasq supports but the OVN DHCP implementation does not.
  3. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  4. Enable the chassis as a gateway:

    spec:
    ...
      nodeTemplate:
        ansible:
          ansibleVars:
            edpm_enable_chassis_gw: true
  5. Specify that the nodes in this set are pre-provisioned:

    spec:
    ...
      preProvisioned: true
      nodeTemplate:
        ansible:
          ansibleVars:
            edpm_enable_chassis_gw: true
  6. Add the SSH key secret that you created so that Ansible can connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  8. Enable persistent logging for the Networker nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  11. Add the network configuration template to apply to your Networker nodes.

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_nmstate: true
            edpm_network_config_update: false
    • edpm_network_config_nmstate - Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
    • edpm_network_config_update - When deploying a node set for the first time, set the edpm_network_config_update variable to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

      Important

      After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service that is a member of the servicesOverride list.

      The following example applies a VLANs network configuration to a set of data plane Networker nodes with DPDK:

              edpm_network_config_template: |
                ...
                {% set mtu_list = [ctlplane_mtu] %}
                {% for network in nodeset_networks %}
                {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
                {%- endfor %}
                {% set min_viable_mtu = mtu_list | max %}
                network_config:
                - type: ovs_user_bridge
                  name: {{ neutron_physical_bridge_name }}
                  mtu: {{ min_viable_mtu }}
                  use_dhcp: false
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  domain: {{ dns_search_domains }}
                  addresses:
                  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                  routes: {{ ctlplane_host_routes }}
                  members:
                  - type: ovs_dpdk_port
                    name: dpdk0
                    members:
                    - type: interface
                      name: nic1
      
      
                - type: linux_bond
                  name: bond_api
                  use_dhcp: false
                  bonding_options: "mode=active-backup"
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  members:
                  - type: interface
                    name: nic2
                    primary: true
      
      
                - type: vlan
                  vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
                  device: bond_api
                  addresses:
                  - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
      
      
                - type: ovs_user_bridge
                  name: br-link0
                  use_dhcp: false
                  ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
                  addresses:
                  - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
                  members:
                  - type: ovs_dpdk_bond
                    name: dpdkbond0
                    mtu: 9000
                    rx_queue: 1
                    ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
                    members:
                    - type: ovs_dpdk_port
                      name: dpdk1
                      members:
                      - type: interface
                        name: nic3
                    - type: ovs_dpdk_port
                      name: dpdk2
                      members:
                      - type: interface
                        name: nic4
      
      
                - type: ovs_user_bridge
                  name: br-link1
                  use_dhcp: false
                  members:
                  - type: ovs_dpdk_bond
                    name: dpdkbond1
                    mtu: 9000
                    rx_queue: 1
                    ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
                    members:
                    - type: ovs_dpdk_port
                      name: dpdk3
                      members:
                      - type: interface
                        name: nic5
                    - type: ovs_dpdk_port
                      name: dpdk4
                      members:
                      - type: interface
                        name: nic6
              neutron_physical_bridge_name: br-ex

      The following example applies a VLANs network configuration to a set of data plane Networker nodes without DPDK:

      edpm_network_config_template: |
                ---
                {% set mtu_list = [ctlplane_mtu] %}
                {% for network in nodeset_networks %}
                {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
                {%- endfor %}
                {% set min_viable_mtu = mtu_list | max %}
                network_config:
                  - type: ovs_bridge
                    name: {{ neutron_physical_bridge_name }}
                    mtu: {{ min_viable_mtu }}
                    use_dhcp: false
                    dns_servers: {{ ctlplane_dns_nameservers }}
                    domain: {{ dns_search_domains }}
                    addresses:
                      - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                    routes: {{ ctlplane_host_routes }}
                    members:
                      - type: interface
                        name: nic2
                        mtu: {{ min_viable_mtu }}
                        # force the MAC address of the bridge to this interface
                        primary: true
                {% for network in nodeset_networks %}
                      - type: vlan
                        mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                        vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                        addresses:
                          - ip_netmask: >-
                              {{
                                lookup('vars', networks_lower[network] ~ '_ip')
                              }}/{{
                                lookup('vars', networks_lower[network] ~ '_cidr')
                              }}
                        routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
                {% endfor %}

      For more information about data plane network configuration, see Customizing data plane networks in Configuring networking services.

  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  13. Define each node in this node set:

    ...
      nodes:
        edpm-networker-0:
          hostName: edpm-networker-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-0.example.com
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
    • edpm-networker-0 - The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    • networks - Defines the IPAM and the DNS records for the node.
    • fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR, as shown in the sketch after this procedure.
    • ansibleVars - Node-specific Ansible variables that customize the node.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • When you define the networkData Secret for an individual node (such as edpm-networker-0), it acts as a complete override rather than a supplemental configuration. Because node-specific configurations override the inherited default values from the nodeTemplate section, you must ensure that your node-specific networkData Secret contains the full set of required network configurations for that node, not just the unique values.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

      For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

  14. Save the openstack_preprovisioned_networker_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset networker-nodes --for condition=SetupReady --timeout=10m

    When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep networker-nodes
    dataplanenodeset-networker-nodes Opaque 1 3m50s
  18. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...
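
The fixedIP addresses that you assign to nodes must fall within an allocation range that is defined for the corresponding network in the NetConfig CR. The following minimal sketch shows one such network definition; the field values are assumptions for illustration and must match your environment:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
  namespace: openstack
spec:
  networks:
  - name: ctlplane
    dnsDomain: ctlplane.example.com
    subnets:
    - name: subnet1
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
      allocationRanges:
      - start: 192.168.122.100   # the example fixedIP values 192.168.122.100-101 fall in this range
        end: 192.168.122.120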

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. Because the nodes are pre-provisioned, they are not provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-networker-nodes
  namespace: openstack
spec:
  services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn

  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        rhc_release: 9.4
        rhc_repositories:
          - {name: "*", state: disabled}
          - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
          - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        neutron_physical_bridge_name: br-ex
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-networker-0:
      hostName: edpm-networker-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-0.example.com
    edpm-networker-1:
      hostName: edpm-networker-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-1.example.com

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with OVS-DPDK and some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: v1
kind: ConfigMap
metadata:
  name: networker-nodeset-values
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  root_password: cmVkaGF0Cg==
  preProvisioned: false
  baremetalSetTemplate:
    ctlplaneInterface: <control plane interface>
    cloudUserName: cloud-admin
    provisioningInterface: <provisioning network interface>
    bmhLabelSelector:
      app: openstack-networker
    passwordSecret:
      name: baremetalset-password-secret
      namespace: openstack
  ssh_keys:
    # Authorized keys that will have access to the dataplane networkers via SSH
    authorized: <authorized key>
    # The private key that will have access to the dataplane networkers via SSH
    private: <private key>
    # The public key that will have access to the dataplane networkers via SSH
    public: <public key>
  nodeset:
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_enable_chassis_gw: true
        rhc_release: 9.4
        rhc_repositories:
          - {name: "*", state: disabled}
          - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
          - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false


          - type: interface
            name: nic2
            use_dhcp: false


          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              rx_queue: 1
              name: dpdk0
              members:
              - type: interface
                name: nic3
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: nic1
        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        dns_search_domains: []
        gather_facts: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges:
          - 192.168.122.0/24
    networks:
      - defaultRoute: true
        name: ctlplane
        subnetName: subnet1
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    nodes:
      edpm-networker-0:
        hostName: edpm-networker-0
    services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-ovs-dpdk
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn
      - neutron-metadata

To create Networker nodes with unprovisioned nodes, you must perform the following tasks:

  1. Create a BareMetalHost custom resource (CR) for each bare-metal Networker node.
  2. Define an OpenStackDataPlaneNodeSet CR for the Networker nodes.

5.4.1. Prerequisites

  • Your RHOCP cluster supports provisioning bare-metal nodes.
  • Your Cluster Baremetal Operator (CBO) is configured for provisioning.

You must create a BareMetalHost custom resource (CR) for each bare-metal Networker node. At a minimum, you must provide the data required to add the bare-metal Networker node on the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range different from the ctlplane address range to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. If you are using virtual media boot for bare-metal Networker nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal Networker node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-networker-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
      Tip

      If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  4. Create a file named bmh_networker_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal Networker node. The following example creates a BareMetalHost CR with the Redfish virtual media provisioning method:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-networker-0
      namespace: openstack
      labels:
        app: openstack-networker
        workload: networker
    spec:
    ...
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
        credentialsName: edpm-networker-0-bmc-secret
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
     [preprovisioningNetworkDataName: <network_config_secret_name>]
    • labels - Metadata labels, such as app, workload, and nodeName are key-value pairs that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
    • address - The URL for communicating with the node’s BMC controller. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
    • credentialsName - The name of the Secret CR you created in the previous step for accessing the BMC of the node.
    • preprovisioningNetworkDataName - Optional: The name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

      For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP documentation.

  5. Create the BareMetalHost resources:

    $ oc create -f bmh_networker_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME         STATE            CONSUMER              ONLINE   ERROR   AGE
    edpm-networker-0   Available      openstack-edpm        true             2d21h
    edpm-networker-1   Available      openstack-edpm        true             2d21h
    ...

Define an OpenStackDataPlaneNodeSet custom resource (CR) for a group of Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example node set CR for unprovisioned Networker nodes with OVS-DPDK.

Prerequisites

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      tlsEnabled: true
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    • env - Optional: a list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack-networker. Metadata labels, such as app, workload, and nodeName are key-value pairs that provide varying levels of granularity for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

      baremetalSetTemplate:
        ...
        provisionServerName: my-os-provision-server
  6. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  8. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  11. Add the network configuration template to apply to your data plane nodes.

      nodeTemplate:
        ...
        ansible:
          ...
          ansiblePort: 22
          ansibleUser: cloud-admin
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
            edpm_network_config_nmstate: true
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_update: false
    • edpm_network_config_update - When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. Reset it to false after the update.

      Important

      After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list, as shown in the sketch after this procedure.

      The following example applies a VLANs network configuration to a set of data plane Networker nodes with DPDK:

              edpm_network_config_template: |
                ...
                {% set mtu_list = [ctlplane_mtu] %}
                {% for network in nodeset_networks %}
                {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
                {%- endfor %}
                {% set min_viable_mtu = mtu_list | max %}
                network_config:
                - type: ovs_user_bridge
                  name: {{ neutron_physical_bridge_name }}
                  mtu: {{ min_viable_mtu }}
                  use_dhcp: false
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  domain: {{ dns_search_domains }}
                  addresses:
                  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                  routes: {{ ctlplane_host_routes }}
                  members:
                  - type: ovs_dpdk_port
                    driver: mlx5_core
                    name: dpdk0
                    mtu: {{ min_viable_mtu }}
                    members:
                    - type: sriov_vf
                      device: nic6
                      vfid: 0
                  - type: interface
                    name: nic1
                    mtu: {{ min_viable_mtu }}
                    # force the MAC address of the bridge to this interface
                    primary: true
                {% for network in nodeset_networks %}
                  - type: vlan
                    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                    addresses:
                    - ip_netmask:
                        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
                {% endfor %}

      The following example applies a VLANs network configuration to a set of data plane Networker nodes without DPDK:

      edpm_network_config_template: |
                ---
                {% set mtu_list = [ctlplane_mtu] %}
                {% for network in nodeset_networks %}
                {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
                {%- endfor %}
                {% set min_viable_mtu = mtu_list | max %}
                network_config:
                  - type: ovs_bridge
                    name: {{ neutron_physical_bridge_name }}
                    mtu: {{ min_viable_mtu }}
                    use_dhcp: false
                    dns_servers: {{ ctlplane_dns_nameservers }}
                    domain: {{ dns_search_domains }}
                    addresses:
                      - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                    routes: {{ ctlplane_host_routes }}
                    members:
                      - type: interface
                        name: nic2
                        mtu: {{ min_viable_mtu }}
                        # force the MAC address of the bridge to this interface
                        primary: true
                {% for network in nodeset_networks %}
                      - type: vlan
                        mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                        vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                        addresses:
                          - ip_netmask: >-
                              {{
                                lookup('vars', networks_lower[network] ~ '_ip')
                              }}/{{
                                lookup('vars', networks_lower[network] ~ '_cidr')
                              }}
                        routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
                {% endfor %}

      For more information about data plane network configuration, see Customizing data plane networks in Configuring networking services.

  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  13. Define each node in this node set:

      nodes:
        edpm-networker-0:
          hostName: edpm-networker-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-0.example.com
          bmhLabelSelector:
            nodeName: edpm-networker-0
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
          bmhLabelSelector:
            nodeName: edpm-networker-1
    • edpm-networker-0 - The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    • networks - Defines the IPAM and the DNS records for the node.
    • fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    • bmhLabelSelector - Optional: The BareMetalHost CR metadata label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

  14. Save the openstack_unprovisioned_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  18. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-networker-0  provisioned   openstack-data-plane   true             3d21h
  19. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                    AGE
    bootstrap               8m40s
    ceph-client             8m40s
    ceph-hci-pre            8m40s
    configure-network       8m40s
    configure-os            8m40s
    ...
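
If you later change edpm_network_config_template, the configure-network service reapplies the configuration through an OpenStackDataPlaneDeployment CR, as noted in the Important admonition earlier in this procedure. The following minimal sketch is illustrative only; the CR name is an assumption and the node set name must match your own:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: networker-network-update   # hypothetical name for illustration
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane         # the node set whose network configuration changed
  servicesOverride:
    - configure-network            # reapplies the network configuration while edpm_network_config_update is true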

The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Networker nodes with OVS-DPDK and some node-specific configuration. The unprovisioned Networker nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack

spec:
  services:
  - redhat
  - bootstrap
  - download-cache
  - reboot-os
  - configure-ovs-dpdk
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - install-certs
  - ovn
  - neutron-metadata

  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_enable_chassis_gw: true
        edpm_kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt
          intel_iommu=on tsx=off isolcpus=2-47,50-95
        edpm_network_config_nmstate: true
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false

          - type: sriov_pf
            name: nic6
            mtu: 9000
            numvfs: 2
            use_dhcp: false
            defroute: false
            nm_controlled: true
            hotplug: true
            promisc: false

          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              driver: mlx5_core
              name: dpdk0
              mtu: {{ min_viable_mtu }}
              members:
              - type: sriov_vf
                device: nic6
                vfid: 0

          - type: linux_bond
            name: bond_api
            use_dhcp: false
            bonding_options: "mode=active-backup"
            dns_servers: {{ ctlplane_dns_nameservers }}
            members:
            - type: sriov_vf
              device: nic6
              driver: mlx5_core
              mtu: {{ min_viable_mtu }}
              spoofcheck: false
              promisc: false
              vfid: 1
              primary: true

          - type: vlan
            vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}

          - type: ovs_user_bridge
            name: br-link0
            use_dhcp: false
            ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond0
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                - type: interface
                  name: nic4
              - type: ovs_dpdk_port
                name: dpdk2
                members:
                - type: interface
                  name: nic5

          - type: ovs_user_bridge
            name: br-link1
            use_dhcp: false
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond1
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk3
                members:
                - type: interface
                  name: nic2
              - type: ovs_dpdk_port
                name: dpdk4
                members:
                - type: interface
                  name: nic3
        edpm_ovn_bridge_mappings:
        - access:br-ex
        - dpdkmgmt:br-link0
        - dpdkdata0:br-link1
        edpm_ovs_dpdk_memory_channels: 4
        edpm_ovs_dpdk_pmd_core_list: 2,3,50,51
        edpm_ovs_dpdk_socket_memory: 4096,4096
        edpm_tuned_isolated_cores: 2-47,50-95
        edpm_tuned_profile: cpu-partitioning
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0

5.5. Deploying the data plane

You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment CR completes execution successfully, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.

Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack
    • name - The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME             	STATUS   MESSAGE
    data-plane-deploy   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

Chapter 6. Managing project networks

Project networks help you to isolate network traffic for cloud computing. Steps to create a project network include planning and creating the network, and adding subnets and routers.

6.1. VLAN planning

When you plan for VLANs in your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets you can segregate traffic between systems into VLANs.

For example, it is best to keep management and API traffic on a separate network from systems that serve web traffic. Traffic between VLANs travels through a router, where you can implement firewalls to govern traffic flow.

You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment.

Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.

Control plane network
Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
Designate network
Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
Designateext network
Used to provide external access to the DNS service resolver and the DNS servers.
External network

An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:

  • To provide virtual machine instances with Internet access.
  • To create flat provider networks that are separate from the control plane.
  • To configure VLAN provider networks on a separate bridge from the control plane.
  • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.

    Note

    When an external network is used for workloads, an OVN gateway is required in some use cases. For more information about use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.

Internal API network
Used for internal communication between RHOSO components.
Octavia network
Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
Storage network
Used for block storage, RBD, NFS, FC, and iSCSI.
Storage Management network

An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

Note

For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.

Tenant (project) network
Used for data communication between virtual machine instances within the cloud deployment.

Figure 6.1. Physical networks for RHOSO


The following table details the default networks used in a RHOSO deployment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 6.1. Default RHOSO networks

| Network name | CIDR             | NetConfig allocation range        | MetalLB IPAddressPool range     | net-attach-def ipam range       | OCP worker nncp range           |
|--------------|------------------|-----------------------------------|---------------------------------|---------------------------------|---------------------------------|
| ctlplane     | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate    | 172.26.0.0/24    | n/a                               | n/a                             | 172.26.0.30 - 172.26.0.70       | 172.26.0.10 - 172.26.0.20       |
| designateext | 172.34.0.0/24    | n/a                               | 172.34.0.80 - 172.34.0.120      | 172.34.0.30 - 172.34.0.70       | 172.34.0.10 - 172.34.0.20       |
| external     | 10.0.0.0/24      | 10.0.0.100 - 10.0.0.250           | n/a                             | n/a                             | n/a                             |
| internalapi  | 172.17.0.0/24    | 172.17.0.100 - 172.17.0.250       | 172.17.0.80 - 172.17.0.90       | 172.17.0.30 - 172.17.0.70       | 172.17.0.10 - 172.17.0.20       |
| octavia      | 172.23.0.0/24    | n/a                               | n/a                             | 172.23.0.30 - 172.23.0.70       | n/a                             |
| storage      | 172.18.0.0/24    | 172.18.0.100 - 172.18.0.250       | n/a                             | 172.18.0.30 - 172.18.0.70       | 172.18.0.10 - 172.18.0.20       |
| storageMgmt  | 172.20.0.0/24    | 172.20.0.100 - 172.20.0.250       | n/a                             | 172.20.0.30 - 172.20.0.70       | 172.20.0.10 - 172.20.0.20       |
| tenant       | 172.19.0.0/24    | 172.19.0.100 - 172.19.0.250       | n/a                             | 172.19.0.30 - 172.19.0.70       | 172.19.0.10 - 172.19.0.20       |

6.3. IP address consumption

In Red Hat OpenStack Services on OpenShift (RHOSO) environments the following systems consume IP addresses from your allocated range:

  • Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connecting to different switches for redundancy.
  • Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share.

6.4. Virtual networking

The following virtual resources consume IP addresses in OpenStack Networking in Red Hat OpenStack Services on OpenShift (RHOSO) environments. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network:

  • Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances.
  • Virtual routers - Each router interface plugging into a subnet requires one IP address.
  • Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network. See the example after this list.
  • Management traffic - Includes OpenStack Services and API traffic. All services share a small number of VIPs. API, RPC and database services communicate on the internal API VIP.
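
The following commands sketch how these virtual resources consume IP addresses: a router interface on a project subnet, an external gateway, and a floating IP for ingress traffic. The router, subnet, network, server, and address values are illustrative only:

  $ openstack router create my-router
  $ openstack router add subnet my-router my-project-subnet
  $ openstack router set --external-gateway public my-router
  $ openstack floating ip create public
  $ openstack server add floating ip my-instance 10.1.2.25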

6.5. Example network plan

This example shows a number of networks in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that accommodate multiple subnets, with each subnet being assigned a range of IP addresses:

Example subnet plan
| Subnet name                           | Address range                   | Number of addresses | Subnet Mask   |
|---------------------------------------|---------------------------------|---------------------|---------------|
| Provisioning network                  | 192.168.100.1 - 192.168.100.250 | 250                 | 255.255.255.0 |
| Internal API network                  | 172.16.1.10 - 172.16.1.250      | 241                 | 255.255.255.0 |
| Storage                               | 172.16.2.10 - 172.16.2.250      | 241                 | 255.255.255.0 |
| Storage Management                    | 172.16.3.10 - 172.16.3.250      | 241                 | 255.255.255.0 |
| Tenant network (Geneve/VLAN)          | 172.16.4.10 - 172.16.4.250      | 241                 | 255.255.255.0 |
| External network (incl. floating IPs) | 10.1.2.10 - 10.1.3.222          | 469                 | 255.255.254.0 |
| Provider network (infrastructure)     | 10.10.3.10 - 10.10.3.250        | 241                 | 255.255.252.0 |

6.6. Working with subnets

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, use subnets to grant network connectivity to instances. A subnet is a pool of IP addresses. Instances are assigned to a Networking service (neutron) network. One network can have multiple subnets, and you can also add IP addresses from multiple subnets to a port.

You can create subnets only in pre-existing networks. Remember that project networks in the Networking service can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them.

You can lessen network latency and load by grouping systems in the same subnet that require a high volume of traffic between each other.
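
The following is a minimal sketch of creating a network and adding a subnet to it with the OpenStack client. The network name, subnet name, and address values are illustrative only:

  $ openstack network create web-servers
  $ openstack subnet create --network web-servers \
    --subnet-range 192.168.200.0/24 \
    --dns-nameserver 192.168.200.1 \
    web-servers-subnet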

6.7. Configuring floating IP port forwarding

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to enable users to set up port forwarding for floating IPs, you must enable the Networking service (neutron) port_forwarding service plug-in.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.

Procedure

  • Update the control plane:

    $ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
    ---
    spec:
      neutron:
        template:
          customServiceConfig: |
            [DEFAULT]
            service_plugins=ovn-router,port_forwarding
    "
    Note

    The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.

    RHOSO users can now set up port forwarding for floating IPs.

Verification

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Ensure that the Networking service has successfully loaded the port_forwarding and router service plug-ins:

    $ openstack extension list --network -c Name -c Alias --max-width 74 \
    --os-cloud <cloud_name> | \
    grep -i -e 'Neutron L3 Router' -e floating-ip-port-forwarding
    • Replace <cloud_name> with the name of the cloud on which you are running the command.

      Sample output

      A successful verification produces output similar to the following:

      | Floating IP Port Forwarding       | floating-ip-port-forwarding        |
      | Neutron L3 Router                 | router                             |
  3. Exit the openstackclient pod:

    $ exit
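
After the port_forwarding plug-in is loaded, a user can create a port forwarding rule for a floating IP. The following is a minimal sketch; the addresses, protocol ports, and identifiers are placeholders:

  $ openstack floating ip port forwarding create \
    --internal-ip-address 192.168.0.10 \
    --port <internal_port_uuid> \
    --internal-protocol-port 80 \
    --external-protocol-port 8080 \
    --protocol tcp \
    <floating_ip_or_uuid>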

6.8. Bridging the physical network

In Red Hat OpenStack Services on OpenShift (RHOSO) environments you can bridge your virtual network to the physical network to enable connectivity to and from virtual instances.

In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge acts as the intermediary between the physical network and any virtual networks.

As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.

To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:

Procedure

  1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 in a text editor, and update the following parameters with values appropriate for the network at your site:

    • IPADDR
    • NETMASK
    • GATEWAY
    • DNS1 (name server)

      Here is an example:

      DEVICE=eth0
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-ex
      ONBOOT=yes
  2. Open /etc/sysconfig/network-scripts/ifcfg-br-ex in a text editor and update the virtual bridge parameters with the IP address values that were previously allocated to eth0:

    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.120.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.120.1
    DNS1=192.168.120.1
    ONBOOT=yes

    You can now assign floating IP addresses to instances and make them available to the physical network.

Chapter 7. Configuring Quality of Service (QoS) policies

You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic in Red Hat OpenStack Services on OpenShift (RHOSO) environments.

You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.

Note

Internal network owned ports, such as DHCP and internal router ports, are excluded from network policy application.

You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to.
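
For example, assuming that a QoS policy named my-qos-policy already exists, you can attach it to an individual port or to a project network with the OpenStack client. The port and network identifiers are placeholders:

  $ openstack port set --qos-policy my-qos-policy <port_uuid>
  $ openstack network set --qos-policy my-qos-policy <network_name>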

7.1. QoS rules

You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron):

Minimum bandwidth (minimum_bandwidth)
Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified bandwidth to each port on which the rule is applied.
Bandwidth limit (bandwidth_limit)
Provides bandwidth limitations on networks, ports, floating IPs (FIPs), and router gateway IPs. If implemented, any traffic that exceeds the specified rate is dropped.
DSCP marking (dscp_marking)
Marks network traffic with a differentiated services code point (DSCP) value.
Minimum packet rate (minimum_packet_rate)
Provides minimum rate of packet transmission constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified rate of packet transmission to each port on which the rule is applied. Currently, only placement enforcement is supported.

QoS policies can be enforced in various contexts, including virtual machine instance placements, floating IP assignments, and gateway IP assignments.

Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress traffic (upload from instance), ingress traffic (download to instance), or both.

Note

In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports. For more information, see Section 7.2, “Configuring the Networking service for QoS policies”.

Table 7.1. Supported traffic direction by driver (all QoS rule types)

| Rule [1]          | ML2/SR-IOV      | ML2/OVN            |
|-------------------|-----------------|--------------------|
| Minimum bandwidth | Egress only     | Egress only [2]    |
| Bandwidth limit   | Egress only [3] | Egress and ingress |
| DSCP marking      | N/A             | Egress only [4]    |

[1] RHOSO does not support QoS for trunk ports.

[2] In ML2/OVN deployments, minimum bandwidth rules are enforced in the physical device. You cannot configure this enforcement on bond interfaces.

[3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it.

[4] ML2/OVN does not support DSCP marking on tunneled protocols.

Table 7.2. Supported traffic direction by driver for placement reporting and scheduling (minimum bandwidth only)

| Enforcement type | ML2/SR-IOV         | ML2/OVN                |
|------------------|--------------------|------------------------|
| Placement        | Egress and ingress | Technology preview [1] |

[1] See OSPRH-507.

Table 7.3. Supported traffic direction by driver for enforcement types (bandwidth limit only)

| Enforcement type | ML2/OVN            |
|------------------|--------------------|
| Floating IP      | Egress and ingress |
| Gateway IP       | Egress and ingress |

7.2. Configuring the Networking service for QoS policies

The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. With the ML2/OVN mechanism driver, qos is loaded by default. However, this is not true for ML2/SR-IOV.

When you use the qos service plug-in with the ML2/SR-IOV mechanism driver, you must also load the qos extension on the respective agent.

The following list summarizes the tasks that you must perform to configure the Networking service for QoS. The task details follow this list:

  • For all types of QoS policies:

    • Add the qos service plug-in.
    • Add qos extension for the agents (SR-IOV only).
  • In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports.
  • Additional tasks for scheduling VM instances using minimum bandwidth policies only:

    • Specify the hypervisor name if it differs from the name that the Compute service (nova) uses.
    • Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node.
    • (Optional) Mark vnic_types as not supported.
  • Additional task for DSCP marking policies:

    • Enable edpm_ovn_encap_tos. By default, edpm_ovn_encap_tos is disabled.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. If you are using the ML2/SR-IOV mechanism driver, you must enable the qos agent extension on the Compute nodes, also referred to as the RHOSO data plane.

    For more information, see Configuring the Networking service for QoS policies for SR-IOV.

  2. Add the required QoS configuration. Place the configuration in the edpm_network_config_template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              OvnHardwareOffloadedQos: true
              ...
  3. If you want to create DSCP marking policies, add edpm_ovn_encap_tos: 1 under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              OvnHardwareOffloadedQos: true
              ...
            edpm_ovn_encap_tos: 1

    When edpm_ovn_encap_tos is enabled (has a value of 1), the Networking service copies the DSCP value of the inner header to the outer header. The default is 0.

  4. Save the OpenStackDataPlaneNodeSet CR definition file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml
  6. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset
    Sample output
    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  7. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  9. Save the OpenStackDataPlaneDeployment CR deployment file.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example
    $ oc get openstackdataplanedeployment -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-deploy     True     Setup Complete
  12. Repeat the oc get command until you see the NodeSet Ready message:

    Example
    $ oc get openstackdataplanenodeset -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

Verification

  • Confirm that the qos service plug-in is loaded:

    $ openstack network qos policy list

    If the qos service plug-in is loaded, then you do not receive a ResourceNotFound error.
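
With the qos service plug-in loaded, you can create a QoS policy and add rules to it. The following is a minimal sketch that creates a policy with an egress bandwidth limit rule; the policy name and limit values are illustrative only:

  $ openstack network qos policy create bw-limiter
  $ openstack network qos rule create --type bandwidth-limit \
    --max-kbps 3000 --max-burst-kbits 300 --egress bw-limiter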

7.3. Configuring the Networking service for QoS policies for SR-IOV

The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. If your Networking service ML2 mechanism driver is SR-IOV, then you must also load the qos extension driver for the NIC switch agent, neutron-sriov-nic-agent, which runs on the Compute nodes, also referred to as the RHOSO data plane.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required QoS configuration, NeutronSriovAgentExtensions: "qos".

    Place the configuration in the edpm_network_config_template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              NeutronSriovAgentExtensions: "qos"
              ...
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml
  5. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset
    Sample output
    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  10. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example
    $ oc get openstackdataplanedeployment -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-deploy     True     Setup Complete
  11. Repeat the oc get command until you see the NodeSet Ready message:

    Example
    $ oc get openstackdataplanenodeset -n openstack
    Sample output
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

Verification

Confirm that the NIC switch agent, neutron-sriov-nic-agent, has loaded the qos extension.

  1. Obtain the UUID for the NIC switch agent:

    $ openstack network agent list
  2. With the neutron-sriov-nic-agent UUID, run the following command:

    $ openstack network agent show <uuid>
    Example
    $ openstack network agent show 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f \
    --max-width 70
    Sample output

    You should see an agent object with a field called configuration. When the qos extension is loaded, the extensions field should contain qos in its list.

    +-------------------+------------------------------------------------+
    | Field             | Value                                          |
    +-------------------+------------------------------------------------+
    | admin_state_up    | UP                                             |
    | agent_type        | NIC Switch agent                               |
    | alive             | :-)                                            |
    | availability_zone | None                                           |
    | binary            | neutron-sriov-nic-agent                        |
    | configuration     | {device_mappings: {}, devices: 0,              |
    |                   | extensions: [qos],                             |
    |                   | resource_provider_bandwidths: {},              |
    |                   | resource_provider_hypervisors: {},             |
    |                   | resource_provider_inventory_defaults:          |
    |                   | {allocation_ratio: 1.0, min_unit: 1,           |
    |                   | step_size: 1, reserved: 0}}                    |
    | created_at        | 2024-08-08 08:22:57                            |
    | description       | None                                           |
    | ha_state          | None                                           |
    | host              | edpm-compute-0.ctlplane.example.com            |
    | id                | 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f           |
    | last_heartbeat_at | 2024-08-08 08:24:27                            |
    | resources_synced  | None                                           |
    | started_at        | 2024-08-08 08:22:57                            |
    | topic             | N/A                                            |
    +-------------------+------------------------------------------------+

Chapter 8. Configuring RBAC policies

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, use role-based access control (RBAC) policies in the Networking service (neutron) to control which projects can attach instances to a network and access resources like QoS policies, security groups, address scopes, subnet pools, and address groups.

Important

Networking service RBAC is separate from secure role-based access control (SRBAC) that the Identity service (keystone) uses in RHOSO.

8.1. Creating RBAC policies

This example procedure demonstrates how to use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Prerequisites

  • The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
  • The python-openstackclient package resides on your workstation.

    $ dnf list installed python-openstackclient

Procedure

  1. Confirm that the system OS_CLOUD variable is set for your cloud:

    $ echo $OS_CLOUD
    my_cloud

    Reset the variable if necessary:

    $ export OS_CLOUD=my_other_cloud

    As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

  2. View the list of available networks:

    $ openstack network list
    +--------------------------------------+-------------+-------------------------------------------------------+
    | id                                   | name        | subnets                                               |
    +--------------------------------------+-------------+-------------------------------------------------------+
    | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 |
    | bcc16b34-e33e-445b-9fde-dd491817a48a | private     | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24      |
    | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public      | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28  |
    +--------------------------------------+-------------+-------------------------------------------------------+
  3. View the list of projects:

    $ openstack project list
    +----------------------------------+----------+
    | ID                               | Name     |
    +----------------------------------+----------+
    | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
    | 519e6344f82e4c079c8e2eabb690023b | services |
    | 80bf5732752a41128e612fe615c886c6 | demo     |
    | 98a2f53c20ce4d50a40dac4a38016c69 | admin    |
    +----------------------------------+----------+
  4. Create an RBAC entry for the web-servers network that grants access to the auditors project (4b0b98f8c6c040f38ba4f7146e8680f5):

    $ openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers
    Sample output
    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | action         | access_as_shared                     |
    | id             | 314004d0-2261-4d5e-bda7-0181fcf40709 |
    | object_id      | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
    | object_type    | network                              |
    | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5     |
    | project_id     | 98a2f53c20ce4d50a40dac4a38016c69     |
    +----------------+--------------------------------------+

    As a result, users in the auditors project can connect instances to the web-servers network.

8.2. Reviewing RBAC policies

This example procedure demonstrates how to obtain information about a Networking service (neutron) role-based access control (RBAC) policy used to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Prerequisites

  • The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
  • The python-openstackclient package resides on your workstation.

    $ dnf list installed python-openstackclient

Procedure

  1. Confirm that the system OS_CLOUD variable is set for your cloud:

    $ echo $OS_CLOUD
    my_cloud

    Reset the variable if necessary:

    $ export OS_CLOUD=my_other_cloud

    As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

  2. Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies:

    $ openstack network rbac list
    Sample output
    +--------------------------------------+-------------+--------------------------------------+
    | id                                   | object_type | object_id                            |
    +--------------------------------------+-------------+--------------------------------------+
    | 314004d0-2261-4d5e-bda7-0181fcf40709 | network     | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
    | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network     | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 |
    +--------------------------------------+-------------+--------------------------------------+
  3. Run the openstack network rbac show command to view the details of a specific RBAC entry:

    $ openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709
    Sample output
    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | action         | access_as_shared                     |
    | id             | 314004d0-2261-4d5e-bda7-0181fcf40709 |
    | object_id      | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
    | object_type    | network                              |
    | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5     |
    | project_id     | 98a2f53c20ce4d50a40dac4a38016c69     |
    +----------------+--------------------------------------+

8.3. Deleting RBAC policies

This example procedure demonstrates how to remove a Networking service (neutron) role-based access control (RBAC) policy that grants a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Prerequisites

  • The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
  • The python-openstackclient package resides on your workstation.

    $ dnf list installed python-openstackclient

Procedure

  1. Confirm that the system OS_CLOUD variable is set for your cloud:

    $ echo $OS_CLOUD
    my_cloud

    Reset the variable if necessary:

    $ export OS_CLOUD=my_other_cloud

    As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

  2. Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies:

    $ openstack network rbac list
    +--------------------------------------+-------------+--------------------------------------+
    | id                                   | object_type | object_id                            |
    +--------------------------------------+-------------+--------------------------------------+
    | 314004d0-2261-4d5e-bda7-0181fcf40709 | network     | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
    | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network     | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 |
    +--------------------------------------+-------------+--------------------------------------+
  3. Run the openstack network rbac delete command to delete the RBAC policy, using the ID of the policy that you want to delete:

    $ openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709
    Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709

8.4. Granting RBAC access to external networks

In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to external networks, that is, networks with gateway interfaces attached.

In the following example, an RBAC policy is created for the web-servers network, and access is granted to the engineering project, c717f263785d4679b16a122516247deb:

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a new RBAC policy using the --action access_as_external option:

    $ openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers
    Sample output

    Created a new rbac_policy:

    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | action         | access_as_external                   |
    | id             | ddef112a-c092-4ac1-8914-c714a3d3ba08 |
    | object_id      | 6e437ff0-d20f-4483-b627-c3749399bdca |
    | object_type    | network                              |
    | target_project | c717f263785d4679b16a122516247deb     |
    | project_id     | c717f263785d4679b16a122516247deb     |
    +----------------+--------------------------------------+

    As a result, users in the engineering project are able to view the network or connect instances to it:

    $ openstack network list
    +--------------------------------------+-------------+------------------------------------------------------+
    | id                                   | name        | subnets                                              |
    +--------------------------------------+-------------+------------------------------------------------------+
    | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 |
    +--------------------------------------+-------------+------------------------------------------------------+
  3. Exit the openstackclient pod:

    $ exit

Chapter 9. Common administrative networking tasks

Sometimes you need to perform administration tasks on the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) such as specifying the name assigned to ports by the internal DNS.

9.1. Configuring shared security groups

When you want one or more projects to be able to share data in a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can use the Networking service (neutron) RBAC policy feature to share a security group. You create security groups and Networking service role-based access control (RBAC) policies using the OpenStack Client.

You can apply a security group directly to an instance during instance creation, or to a port on the running instance.

Note

You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port in Creating and managing instances.
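
For example, a user in the target project can apply the shared security group when creating a port and then assign that port to a new instance. The port and instance names are illustrative only, and the angle-bracket values are placeholders:

  $ openstack port create --network <network> --security-group <shared_security_group> instance-port
  $ openstack server create --image <image> --flavor <flavor> --port instance-port my_instance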

Prerequisites

  • You have at least two RHOSO projects that you want to share.
  • In one of the projects, the current project, you have created a security group that you want to share with another project, the target project.

    In this example, the ping_ssh security group is created:

    Example
    $ openstack security group create ping_ssh
  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Obtain the names or IDs of the project that contains the security group and the target project.

    $ openstack project list
  3. Obtain the name or ID of the security group that you want to share between RHOSO projects.

    $ openstack security group list
  4. Using the identifiers from the previous steps, create an RBAC policy using the openstack network rbac create command.

    In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e. The ID of the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24:

    Example
    $ openstack network rbac create --target-project \
    32016615de5d43bb88de99e7f2e26a1e --action access_as_shared \
    --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24
    --target-project

    specifies the project that requires access to the security group.

    Tip

    You can share data between all projects by using the --target-all-projects argument instead of --target-project <target_project>. By default, only the admin user has this privilege.

    --action access_as_shared
    specifies what the project is allowed to do.
    --type
    indicates that the target object is a security group.
    5ba835b7-22b0-4be6-bdbe-e0722d1b5f24

    is the ID of the security group to which access is being granted.

    The target project is able to access the security group when running the OpenStack Client security group commands, in addition to being able to bind to its ports. No other users (other than administrators and the owner) are able to access the security group.

    Tip

    To remove access for the target project, delete the RBAC policy that allows it using the openstack network rbac delete command.

  5. Exit the openstackclient pod:

    $ exit

9.2. Specifying the name that DNS assigns to ports

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can specify the name assigned to ports by the internal DNS. You enable this functionality in the Networking service (neutron), by loading the ML2 extension driver, DNS domain for ports, dns_domain_ports.

After loading the driver, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.

Important

You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to internally resolve names for ports in your RHOSO environment. Using the NeutronDnsDomain default value, openstacklocal, means that the Networking service does not internally resolve port names for DNS.

Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  • Update the control plane with the key-value pair, extension_drivers=dns_domain_ports:

    $ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
    ---
    spec:
      neutron:
        template:
          customServiceConfig: |
            [ml2]
            extension_drivers=dns_domain_ports
    "
    Note

    If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These extensions are incompatible and cannot be defined simultaneously.

    RHOSO users can now use the --dns-name option to assign names to ports.

Verification

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the Networking service has successfully loaded the dns_domain_ports ML2 extension driver:

    $ openstack extension list --network --max-width 75 \
    --os-cloud <cloud_name> | grep dns-domain-ports
    • Replace <cloud_name> with the name of the cloud on which you are running the command.

      Sample output

      A successful verification produces output similar to the following:

      | dns_domain for ports | dns-domain-ports | Allows the DNS domain to be specified for a network port. |
  3. Create a new port (new_port) on a network (public). Assign a DNS name (my_port) to the port.

    Example
    $ openstack port create --network public --dns-name my_port new_port
  4. Display the details for your port (new_port).

    Example
    $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
    Sample output
    +-------------------------+----------------------------------------------+
    | Field                   | Value                                        |
    +-------------------------+----------------------------------------------+
    | dns_assignment          | fqdn='my_port.example.com',                  |
    |                         | hostname='my_port',                          |
    |                         | ip_address='10.65.176.113'                   |
    | dns_domain              | example.com                                  |
    | dns_name                | my_port                                      |
    | name                    | new_port                                     |
    +-------------------------+----------------------------------------------+

    Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier with NeutronDnsDomain.

  5. Create a new VM instance (my_vm) using the port (new_port) that you just created.

    Example
    $ openstack server create --image rhel --flavor m1.small --port new_port my_vm
  6. Display the details for your port (new_port).

    Example
    $ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port
    Sample output
    +-------------------------+----------------------------------------------+
    | Field                   | Value                                        |
    +-------------------------+----------------------------------------------+
    | dns_assignment          | fqdn='my_vm.example.com',                    |
    |                         | hostname='my_vm',                            |
    |                         | ip_address='10.65.176.113'                   |
    | dns_domain              | example.com                                  |
    | dns_name                | my_vm                                        |
    | name                    | new_port                                     |
    +-------------------------+----------------------------------------------+

    Note that the Compute service changes the dns_name attribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).

  7. Exit the openstackclient pod:

    $ exit

9.3. Enabling NUMA affinity on ports

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to enable users to create instances with NUMA affinity on the port, you must load the Networking service (neutron) ML2 extension driver, NUMA port affinity policy, port_numa_affinity_policy.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  • Update the control plane with the key value pair, extension_drivers=port_numa_affinity_policy:

    $ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
    ---
    spec:
      neutron:
        template:
          customServiceConfig: |
            [ml2]
            extension_drivers=port_numa_affinity_policy
    "

Verification

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the Networking service has successfully loaded the port_numa_affinity_policy ML2 extension driver:

    $ openstack extension list --network --max-width 74 \
    --os-cloud <cloud_name> | grep port-numa-affinity-policy
    • Replace <cloud_name> with the name of the cloud on which you are running the command.

      Sample output

      A successful verification produces output similar to the following:

      | Port NUMA affinity policy | port-numa-affinity-policy | Expose the port NUMA affinity policy |
  3. Create a new port.

    When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port:

    • --numa-policy-required - NUMA affinity policy required to schedule this port.
    • --numa-policy-preferred - NUMA affinity policy preferred to schedule this port.
    • --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.

      Example
      $ openstack port create --network public \
        --numa-policy-legacy  myNUMAAffinityPort
  4. Display the details for your port.

    Example
    $ openstack port show myNUMAAffinityPort -c numa_affinity_policy
    Sample output

    When the extension is loaded, the Value column reads legacy, preferred, or required. If the extension has failed to load, Value reads None:

    +----------------------+--------+
    | Field                | Value  |
    +----------------------+--------+
    | numa_affinity_policy | legacy |
    +----------------------+--------+
  5. Exit the openstackclient pod:

    $ exit

9.4. Limiting queries to the metadata service

To protect Red Hat OpenStack Services on OpenShift (RHOSO) environments against cyber threats such as denial of service (DoS) attacks, the Networking service (neutron) offers administrators the ability to limit the rate at which VM instances can query the Compute metadata service. Administrators do this by assigning values to a set of parameters that the Networking service uses to configure HAProxy servers to perform the rate limiting. The HAProxy servers run inside the metadata service.

To add metadata rate limiting for a node set, complete these tasks:

  1. Create a ConfigMap custom resource (CR) to configure the nodes.
  2. Create a custom service for the feature that runs the playbook for the service.
  3. Include the ConfigMap CR in the custom service.

A detailed procedure follows.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • Your RHOSO environment uses IPv4 networking.

    Currently, the Networking service does not support metadata rate limiting on IPv6 networks.

  • You have a scheduled maintenance window.

    This procedure requires you to restart the OVN metadata service.

Procedure

  1. Create a ConfigMap CR that defines a new configuration for metadata rate limiting, and save it to a YAML file on your workstation, for example, neutron-metadata-rate-limit.yaml.

    Note

    Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the transport_url.

    Set values for the following rate limiting parameters:

    rate_limit_enabled
    Enables rate limiting of metadata requests. The default value is false. Set the value to true to enable metadata rate limiting.
    ip_versions
    The IP version, 4, of the metadata IP addresses on which you want to control query rates. RHOSO does not yet support metadata rate limiting for IPv6 networks.
    base_window_duration
    The time span, in seconds, during which query requests are limited. The default value is 10 seconds.
    base_query_rate_limit
    The maximum number of requests allowed during the base_window_duration. The default value is 10 requests.
    burst_window_duration
    The time span, in seconds, during which a query rate higher than the base_query_rate_limit is allowed. The default value is 10 seconds.
    burst_query_rate_limit
    The maximum number of requests allowed during the burst_window_duration. The default value is 10 requests.
    Example

    In this example, the Networking service is configured for a base time and rate that allows instances to query the IPv4 metadata service IP address 6 times over a 60-second period. The Networking service is also configured for a burst time and rate that allows a higher rate of 2 queries during shorter 10-second periods:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: neutron-metadata-rate-limit
    data:
       20-neutron-metadata-rate.conf: |
         [metadata_rate_limiting]
         rate_limit_enabled = True
         ip_versions = 4
         base_window_duration = 60
         base_query_rate_limit = 6
         burst_window_duration = 10
         burst_query_rate_limit = 2
         ...
  2. Create the ConfigMap object by using the ConfigMap CR file.

    Example
    $ oc create -f neutron-metadata-rate-limit.yaml -n openstack
  3. Create an OpenStackDataPlaneService CR that defines the metadata rate limit custom service, and save it to a YAML file on your workstation, for example, neutron-metadata-rate-limit-service.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: neutron-metadata-rate-limit
  4. Add the ConfigMap CR to the custom service, and specify the Secret CRs for the cell that the node set running this service connects to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: neutron-metadata-rate-limit
    spec:
      dataSources:
        - configMapRef:
            name: neutron-metadata-rate-limit
        - secretRef:
            name: neutron-ovn-metadata-agent-neutron-config
        - secretRef:
            name: nova-metadata-neutron-config
      tlsCerts:
        default:
          contents:
          - dnsnames
          - ips
          networks:
          - ctlplane
          issuer: osp-rootca-issuer-ovn
          keyUsages:
            - digital signature
            - key encipherment
            - client auth
      caCerts: combined-ca-bundle
      containerImageFields:
      - EdpmNeutronMetadataAgentImage
  5. Specify the Ansible commands to create the custom service by referencing an Ansible playbook or by including the Ansible play in the playbookContents field:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: neutron-metadata-rate-limit
    spec:
      playbook: osp.edpm.neutron_metadata
      dataSources:
        - configMapRef:
            name: neutron-metadata-rate-limit
        - secretRef:
            name: neutron-ovn-metadata-agent-neutron-config
        - secretRef:
            name: nova-metadata-neutron-config
      tlsCerts:
        default:
          contents:
          - dnsnames
          - ips
          networks:
          - ctlplane
          issuer: osp-rootca-issuer-ovn
          keyUsages:
            - digital signature
            - key encipherment
            - client auth
      caCerts: combined-ca-bundle
      containerImageFields:
      - EdpmNeutronMetadataAgentImage
  6. Create the neutron-metadata-rate-limit service:

    $ oc apply -f neutron-metadata-rate-limit-service.yaml -n openstack

Verification

  • Confirm that the custom service is created:

    $ oc get openstackdataplaneservice neutron-metadata-rate-limit -o yaml -n openstack
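
  • Optionally, observe the effect of the rate limits from inside a VM instance on an affected node. The following is a minimal sketch, assuming the instance reaches the metadata service at the standard IPv4 address 169.254.169.254; requests that exceed the configured limits are rejected, typically with HTTP 429 (Too Many Requests), although the exact response can vary:

    $ for i in $(seq 1 10); do \
        curl -s -o /dev/null -w "%{http_code}\n" \
        http://169.254.169.254/openstack/latest/meta_data.json; \
      done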

9.5. Enabling and configuring FDB learning

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can use forwarding database (FDB) learning to prevent traffic flooding on ports that have security disabled and belong to a provider network (network has an ML2/OVN localnet port). You can also set the maximum number of FDB entries that can be removed in a single transaction.
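
For example, a provider network port typically falls into this category after you disable port security on it. The following command is only illustrative; the port name myProviderPort is an assumption:

  $ openstack port set --disable-port-security myProviderPort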

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following configuration to the neutron service configuration:

    spec:
        neutron:
            template:
                customServiceConfig: |
                   [ovn]
                   localnet_learn_fdb = true
                   fdb_age_threshold = 300
                   [ovn_nb_global]
                   fdb_removal_limit = 50
    • localnet_learn_fdb - Enables FDB learning by allowing the localnet ports that are created for each provider network to learn MAC addresses and store them in the FDB table of the OVN southbound database.
    • fdb_age_threshold - Sets the maximum time, in seconds, that learned MAC addresses stay in the FDB table, which prevents the table from growing indefinitely.
    • fdb_removal_limit - Limits the number of FDB table entries that the aging function can remove in a single transaction.

      Important

      If you disable port security on a provider network in your environment, you must set the related forwarding database (FDB) learning and aging parameters.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
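
A rough way to confirm that the new settings reached OVN is to inspect the OVN northbound database. The following sketch assumes that the northbound database pod is named ovsdbserver-nb-0 and that the FDB options appear on the localnet Logical_Switch_Port records and in the NB_Global options column; the pod name and the exact placement of the options can differ in your environment:

  $ oc rsh -n openstack ovsdbserver-nb-0 \
      ovn-nbctl find Logical_Switch_Port type=localnet

  $ oc rsh -n openstack ovsdbserver-nb-0 \
      ovn-nbctl get NB_Global . options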

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.