Configuring networking services
Configuring the Networking service (neutron) for managing networking traffic in a Red Hat OpenStack Services on OpenShift environment
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Introduction to OpenStack networking
The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Services on OpenShift (RHOSO). It manages traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata.
The Networking service also provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls.
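As a brief illustration of these API constructs, the following commands sketch how a user might create a project network, a subnet, and a router with the openstack CLI. The network, subnet, and router names, the subnet range, and the external network name public are illustrative assumptions, not values from your environment:

    $ openstack network create my-network
    $ openstack subnet create --network my-network --subnet-range 192.0.2.0/24 my-subnet
    $ openstack router create my-router
    $ openstack router add subnet my-router my-subnet
    $ openstack router set --external-gateway public my-router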
1.1. Managing your RHOSO networks
With the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron), you can effectively meet your site’s networking goals. You can perform the following tasks:
Customize the networks used in your data plane.
In RHOSO, the network configuration applied by default to the data plane nodes is the single NIC VLANs configuration. However, you can modify the network configuration that the OpenStack Operator applies for each data plane node set in your RHOSO environment.
Provide connectivity to VM instances within a project.
Project networks primarily enable general (non-privileged) projects to manage networks without involving administrators. These networks are entirely virtual and require virtual routers to interact with other project networks and external networks such as the Internet. Project networks also usually provide DHCP and metadata services to VM (virtual machine) instances. RHOSO supports the following project network types: flat, VLAN, and GENEVE.
Set ingress and egress limits for traffic on VM instances.
You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You can also apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.
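For example, the following commands show one way to create a bandwidth-limit QoS policy and apply it to a port and to a network. The policy name, rate values, and port and network names are illustrative assumptions:

    $ openstack network qos policy create bw-limiter
    $ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 300 --egress bw-limiter
    $ openstack port set --qos-policy bw-limiter my-port
    $ openstack network set --qos-policy bw-limiter my-project-network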
Control which projects can attach instances to a shared network.
Using role-based access control (RBAC) policies in the RHOSO Networking service, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project.
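As an illustration, an administrator might share an existing network with a single project by creating an RBAC policy similar to the following. The network name and target project ID are assumptions:

    $ openstack network rbac create --target-project 1234567890abcdef1234567890abcdef \
        --action access_as_shared --type network my-shared-network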
Secure your network at the port level.
Security groups provide a container for virtual firewall rules that control ingress (inbound to instances) and egress (outbound from instances) network traffic at the port level. Security groups use a default deny policy and only contain rules that allow specific traffic. Each port can reference one or more security groups in an additive fashion. ML2/OVN uses the Open vSwitch firewall driver to translate security group rules to a configuration.
By default, security groups are stateful. In ML2/OVN deployments, you can also create stateless security groups. A stateless security group can provide significant performance benefits. Unlike stateful security groups, stateless security groups do not automatically allow returning traffic, so you must create a complementary security group rule to allow the return of related traffic.
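The following commands are a minimal sketch of a stateless security group that allows inbound SSH plus a complementary egress rule for the reply traffic. The group name and CIDR are assumptions, and stateless security groups require an ML2/OVN deployment:

    $ openstack security group create --stateless ssh-stateless
    $ openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 192.0.2.0/24 ssh-stateless
    $ openstack security group rule create --egress --protocol tcp --remote-ip 192.0.2.0/24 ssh-stateless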
1.2. Networking service components
The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) includes the following components:
API server
The RHOSO networking API includes support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. RHOSO networking includes a growing list of plug-ins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.
Modular Layer 2 (ML2) plug-in and agents
ML2 plugs and unplugs ports, creates networks or subnets, and provides IP addressing.
Messaging queue
Accepts and routes RPC requests between RHOSO services to complete API operations.
1.3. Modular Layer 2 (ML2) networking
Modular Layer 2 (ML2) is the Red Hat OpenStack Services on OpenShift (RHOSO) networking core plug-in. The ML2 modular design enables the concurrent operation of mixed network technologies through mechanism drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2.
The ML2 framework distinguishes between the two kinds of drivers that can be configured:
- Type drivers
Define how an RHOSO network is technically realized.
Each available network type is managed by an ML2 type driver, which maintains any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for allocating a free segment in project networks. Examples of type drivers are GENEVE, VLAN, and flat networks.
- Mechanism drivers
Define the mechanism to access an RHOSO network of a certain type.
The mechanism driver takes the information established by the type driver and applies it to the networking mechanisms that have been enabled. RHOSO uses the OVN mechanism driver.
Mechanism drivers can employ L2 agents and, by using RPC, can interact directly with external devices or controllers. You can use multiple mechanism and type drivers simultaneously to access different ports of the same virtual network.
1.4. ML2 network types
You can operate multiple network segments at the same time. ML2 supports the use and interconnection of multiple network segments. You do not have to bind a port to a network segment because ML2 binds ports to segments that have connectivity. Depending on the mechanism driver, ML2 supports the following network segment types:
- Flat
- VLAN
- GENEVE tunnels
- Flat
- All virtual machine (VM) instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs.
- VLAN
With RHOSO networking, users can create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other network infrastructure on the same Layer 2 VLAN.
You can use VLANs to segment network traffic for computers running on the same switch. This means that you can logically divide your switch by configuring the ports to be members of different networks — they are basically mini-LANs that you can use to separate traffic for security reasons.
For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and traffic between them must pass through a router, as if they were two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other.
- GENEVE tunnels
- Generic Network Virtualization Encapsulation (GENEVE) recognizes and accommodates the changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver.
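For example, an administrator can create the network types described in this list with commands similar to the following. The physical network name datacentre, the VLAN ID, and the network names are assumptions; a project network created without provider options defaults to a tunneled (GENEVE) network in an ML2/OVN deployment:

    $ openstack network create --external --provider-network-type flat --provider-physical-network datacentre provider-flat
    $ openstack network create --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201 provider-vlan-201
    $ openstack network create project-geneve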
1.5. Extension drivers for the RHOSO Networking service
The Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is extensible. Extensions serve two purposes: they allow the introduction of new features in the API without requiring a version change and they allow the introduction of vendor specific niche functionality. Applications can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is a versioned request; that is, an extension available in one API version might not be available in another.
The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include support for QoS, port security, and so on.
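For example, you can list the extensions that your deployment exposes with the openstack CLI instead of calling the /extensions URI directly. The qos alias shown below is an assumption and is present only when the corresponding extension driver is enabled:

    $ openstack extension list --network
    $ openstack extension show qos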
Chapter 2. Working with ML2/OVN
Red Hat OpenStack Services on OpenShift (RHOSO) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for RHOSO ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver.
2.1. Open Virtual Network (OVN)
Open Virtual Network (OVN) is a system that supports logical network abstraction in virtual machine and container environments. OVN is used as the mechanism driver for the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron).
Sometimes called open source virtual networking for Open vSwitch (OVS), OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups and services such as DHCP.
A physical network comprises physical wires, switches, and routers. A virtual network extends a physical network into a hypervisor or container platform, bridging VMs or containers into the physical network. An OVN logical network is a network implemented in software that is insulated from physical networks by tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to overlap with those used on physical networks without causing conflicts. Logical network topologies can be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs that are part of a logical network can migrate from one physical machine to another without network disruption.
The encapsulation layer prevents VMs and containers connected to a logical network from communicating with nodes on physical networks. For clustering VMs and containers, this can be acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical networks. OVN provides multiple forms of gateways for this purpose.
An OVN deployment consists of several components:
- Cloud Management System (CMS)
- Integrates OVN into a physical network by managing the OVN logical network elements and connecting the OVN logical network infrastructure to physical network elements. Some examples include OpenStack and OpenShift.
- OVN databases
- Store data representing the OVN logical and physical networks.
- Hypervisors
- Run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual machine.
- Gateways
- Extend a tunnel-based OVN logical network into a physical network by forwarding packets between tunnels and the physical network infrastructure.
2.2. List of components in the RHOSO OVN architecture
Open Virtual Network (OVN) provides networking services for Red Hat OpenStack Services on OpenShift (RHOSO) environments. As illustrated in Figure 2.1, the OVN architecture consists of the following components and services:
- Networking service
- This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and load balancer details, among others.
- Compute node
- This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances.
- ML2 plug-in with OVN mechanism driver
- The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the RHOSO control plane on OpenShift worker nodes.
- OVN northbound (NB) database (ovn-nb)
- This database stores the logical OVN networking configuration from the OVN ML2 plugin. It typically runs on the RHOSO control plane and listens on TCP port 6641.
The northbound database (OVN_Northbound) serves as the interface between OVN and a cloud management system such as RHOSO. RHOSO produces the contents of the northbound database.
The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. Every RHOSO Networking service (neutron) object is represented in a table in the northbound database.
- OVN northbound service (ovn-northd)
- This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN southbound database. It typically runs on the RHOSO control plane.
- OVN southbound (SB) database (ovn-sb)
- This database stores the converted logical data path flows. It typically runs on the RHOSO control plane and listens on TCP port 6642.
The southbound database (OVN_Southbound) holds the logical and physical configuration state for the OVN system to support virtual network abstraction. The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements.
The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema.
- OVS database server (OVSDB)
- Hosts the OVN northbound and southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db.
- OVN controller (ovn-controller)
- This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes.
- OVN metadata agent (ovn-metadata-agent)
- This agent creates the haproxy instances for managing the OVS interfaces, network namespaces, and HAProxy processes used to proxy metadata API requests. The agent runs on all Compute and gateway nodes.
The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>).
OpenStack guest instances access the Networking metadata service available at the link-local IP address 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.
Figure 2.1. OVN architecture in a RHOSO environment
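If you want to see how these components fit together in a running environment, you can inspect the northbound and southbound databases from the pods that host them. This is a minimal sketch: the pod names ovsdbserver-nb-0 and ovsdbserver-sb-0 are assumptions and can differ in your deployment:

    $ oc rsh -n openstack ovsdbserver-nb-0 ovn-nbctl show    # logical switches, routers, and ports
    $ oc rsh -n openstack ovsdbserver-sb-0 ovn-sbctl show    # chassis (Compute and gateway nodes) and port bindings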
2.3. Layer 3 high availability with OVN
OVN supports Layer 3 high availability (L3 HA) without any special configuration in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
When you create a router, do not use the --ha option because OVN routers are highly available by default. openstack router create commands that include the --ha option fail.
OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller.
L3 HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck.
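For example, creating a router and checking which gateway chassis it is scheduled on might look like the following. The router name and the external network name public are assumptions, and the database pod name and logical router port name must be adapted to your environment:

    $ openstack router create my-router
    $ openstack router set --external-gateway public my-router
    $ oc rsh -n openstack ovsdbserver-nb-0 ovn-nbctl lrp-get-gateway-chassis lrp-<router_port_id>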
- BFD monitoring
- OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the GENEVE tunnels established from node to node.
Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements.
Each compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other compute nodes.
External network failures are not detected as would happen with an ML2-OVS configuration.
L3 HA for OVN supports the following failure modes:
- The gateway node becomes disconnected from the network (tunneling interface).
- ovs-vswitchd stops (ovs-vswitchd is responsible for BFD signaling).
- ovn-controller stops (ovn-controller removes itself as a registered node).
This BFD monitoring mechanism only works for link failures, not for routing failures.
2.4. Active-active clustered database service model
On Red Hat OpenStack Services on OpenShift (RHOSO) environments, OVN uses a clustered database service model that applies the Raft consensus algorithm to enhance performance of OVS database protocol traffic and provide faster, more reliable failover handling.
A clustered database operates on a cluster of at least three database servers on different hosts. Servers use the Raft consensus algorithm to synchronize writes and share network traffic continuously across the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database read operations, which mitigates potential bottlenecks on the control plane. Write operations are handled by the cluster leader.
If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining operational servers. The clustered database service model handles failovers more efficiently than the pacemaker-based model did. This mitigates related downtime and complications that can occur with longer failover times.
The leader election process requires a majority, so the fault tolerance capacity is limited by the highest odd number in the cluster. For example, a three-server cluster continues to operate if one server fails. A five-server cluster tolerates up to two failures. Increasing the number of servers to an even number does not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a three-server cluster.
Most RHOSO deployments use three servers.
Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate an additional failure, but write performance decreases.
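A minimal way to check the health of a clustered database is to query its Raft status. In the following sketch, the pod name and control socket path are assumptions that might differ in your environment:

    $ oc rsh -n openstack ovsdbserver-nb-0 \
        ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound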
2.5. SR-IOV with ML2/OVN and native OVN DHCP
You can deploy a custom node set to use SR-IOV in an ML2/OVN deployment with native OVN DHCP in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
- Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release.
- All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports.
- North/south routing on VF (direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router’s gateway ports.
Chapter 3. Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment
An OVN gateway connects the logical OpenStack tenant network to a physical external network. Many RHOSO environments have at least one OVN gateway and might have more than one physical external network and more than one OVN gateway.
Some environments do not include an OVN gateway. For example, an environment might not have an OVN gateway because external connectivity is not required, because the environment does not use centralized floating IPs or routers and its workloads connect directly to provider networks, or because some other connection method is used.
You can choose where OVN gateways are configured. OVN gateway location choices include the following:
- Control plane
- OVN gateways on RHOCP worker nodes that host the OpenStack controller services. Place the OVN gateway on a dedicated NIC whose sole purpose is to provide an interface to the OVN gateway.
- Data plane
- OVN gateways on dedicated Networker nodes on the data plane.
Control plane OVN gateways can be subject to more disruption than data plane OVN gateways.
3.1. Configuring a control plane OVN gateway with a dedicated NIC
You can place OVN gateways on dedicated NICs on the control plane nodes. This reduces the potential for interruption but requires an additional NIC.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
- Each RHOCP worker node that hosts the RHOSO control plane has a NIC dedicated to an OVN gateway. Use the same NIC name for the dedicated NIC on each node. In addition, each worker node has at least the two NICs described in Red Hat OpenShift Container Platform cluster requirements.
- Your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, exists on your workstation.
Procedure
- Open the OpenStackControlPlane CR definition file, openstack_control_plane.yaml. Add the following ovnController configuration, including nicMappings, to the ovn service configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      ovn:
        template:
          ovnController:
            networkAttachment: tenant
            nicMappings:
              <network_name>: <nic_name>

  - Replace <network_name> with the name of the physical provider network your gateway is on. This should match the value of the --provider-physical-network argument to the openstack network create command used to create the network. For example, datacentre.
  - Replace <nic_name> with the name of the NIC connecting to the gateway network, such as enp6s0.
  - Optional: Add additional <network_name>: <nic_name> pairs under nicMappings as required.
- Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack

  The ovn-operator creates the network attachment definitions, adds them to the pods, creates an external bridge, and configures external-ids:ovn-bridge-mappings. The setting external-ids:ovn-cms-options=enable-chassis-as-gw is configured by default.
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".
  Tip: Append the -w option to the end of the get command to track deployment progress.
- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running. Verify that ovn-controller and ovn-controller-ovs pods are running, and that the number of running pods is equal to the number of OCP control plane nodes where OpenStack control plane services are running.
Verification
- Run a remote shell command on the OpenStackClient pod to confirm that the OVN Controller Gateway Agents are running on the control plane nodes:

    $ oc rsh -n openstack openstackclient openstack network agent list

  Example output:

    +--------------------------------------+------------------------------+-------+
    | ID                                   | agent_type                   | host  |
    +--------------------------------------+------------------------------+-------+
    | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl0 |
    | ff66288c-5a7c-41fb-ba54-6c781f95a81e | OVN Controller Gateway agent | ctrl1 |
    | 5335c34d-9233-47bd-92f1-fc7503270783 | OVN Controller Gateway agent | ctrl2 |
    +--------------------------------------+------------------------------+-------+
3.2. Configuring RHOSO with no control plane OVN gateways
You can configure a deployment with no control plane OVN gateways. For example, you configure data plane OVN gateways only, or you do not configure any OVN gateways.
Configuring a deployment with no control plane OVN gateways requires omitting the ovnController configuration from the control plane custom resource (CR).
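The following fragment is a minimal sketch of what the ovn section might look like with the ovnController section omitted. Only the structure is significant; the remaining content of the ovn template depends on your deployment:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      ovn:
        template:
          # No ovnController section: no OVN gateways are created on the control plane.
          ...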
Prerequisites
- RHOSO 18.0.3 (Feature Release 1) or later.
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- If there is an ovnController section:
  - Remove the ovnController section.
  - Update the control plane:

      $ oc apply -f openstack_control_plane.yaml -n openstack
Chapter 4. Customizing data plane networks
In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, the network configuration applied by default to the data plane nodes is the single NIC VLANs configuration. However, you can modify the network configuration that the OpenStack Operator applies.
4.1. Applying custom network configuration to a node set
You can customize the network configuration for each data plane node set in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
- Add the required network configuration or modify the existing configuration. Place the configuration in the edpm_network_config_template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_network_config_template: |
              ---
              Network configuration options here
              ...

  When modifying your network configuration, refer to Section 4.2, “Network interface configuration options”.
- Save the OpenStackDataPlaneNodeSet CR definition file.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml

- Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset

  Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   False    Deployment not started

- Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy

  Tip: Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.
- Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
      - my-data-plane-node-set

- Save the OpenStackDataPlaneDeployment CR deployment file.
- Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack

  You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -n openstack -f \
        --max-log-requests 10

- Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack

  Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     Setup Complete

- Repeat the oc get command until you see the NodeSet Ready message:

    $ oc get openstackdataplanenodeset -n openstack

  Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

  For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
4.2. Network interface configuration options
Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.
Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic.
4.2.1. interface
Defines a single network interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2, instead of named interfaces such as eth0 and eno2. For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth0, eth1, and so on. Names appear in this format when consistent device naming is turned off in udev.
- enoX and emX interfaces, such as eno0, eno1, em0, em1, and so on. These are usually on-board interfaces.
- enX and any other interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, for example, if the interfaces have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
| Option | Default | Description |
|---|---|---|
| name |  | Name of the interface. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses |  | A list of IP addresses assigned to the interface. |
| routes |  | A list of routes assigned to the interface. For more information, see Section 4.2.7, “routes”. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the interface as the primary interface. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the interface. |
- Example
...
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: interface
    name: nic2
...
4.2.2. vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
vlan options
| Option | Default | Description |
|---|---|---|
| vlan_id |  | The VLAN ID. |
| device |  | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses |  | A list of IP addresses assigned to the VLAN. |
| routes |  | A list of routes assigned to the VLAN. For more information, see Section 4.2.7, “routes”. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the VLAN as the primary interface. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the VLAN. |
- Example
...
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  ...
  - type: vlan
    device: nic{{ loop.index + 1 }}
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
  ...
ovs_bridge To create a VLAN on an
ovs_bridge, you must place the VLAN configuration under thememberssection:... network_config: - type: ovs_bridge name: br0 use_dhcp: false members: - type: interface name: nic5 - type: vlan vlan_id: 138 use_dhcp: false ...- Example - creating a VLAN on an
ovs_user_bridge To create a VLAN on an
ovs_user_bridge, you must place the VLAN configuration under thememberssection. The members must be either anovs_dpdk_bondor andovs_dpdk_port:... network_config: -type: ovs_user_bridge name: br-link members: -type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 4 members: -type: ovs_dpdk_port name: dpdk0 members: -type: interface name: nic2 -type: ovs_dpdk_port name: dpdk1 members: -type: interface name: nic3 -type: vlan vlan_id:138 use_dhcp: false ...
4.2.3. ovs_bridge
Defines a bridge in Open vSwitch (OVS), which connects multiple interface, ovs_bond, and vlan objects together.
The network interface type, ovs_bridge, takes a parameter name.
Placing Control group networks on the ovs_bridge interface can cause downtime. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:
- You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
- To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
If you have multiple bridges, you must use distinct bridge names other than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.
ovs_bridge options
| Option | Default | Description |
|---|---|---|
| name |  | Name of the bridge. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses |  | A list of IP addresses assigned to the bridge. |
| routes |  | A list of routes assigned to the bridge. For more information, see Section 4.2.7, “routes”. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members |  | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. |
| ovs_options |  | A set of options to pass to OVS when creating the bridge. |
| ovs_extra |  | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bridge. |
- Example
...
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_bridge
    name: br-bond
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    members:
    - type: ovs_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      ovs_options: {{ bound_interface_ovs_options }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu }}
...
4.2.4. Network interface bonding
You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.
Red Hat OpenStack Services on OpenShift (RHOSO) supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.
| Bond type | Type value | Allowed bridge types | Allowed members |
|---|---|---|---|
| OVS kernel bonds | ovs_bond | ovs_bridge | interface |
| OVS-DPDK bonds | ovs_dpdk_bond | ovs_user_bridge | ovs_dpdk_port |
| Linux kernel bonds | linux_bond | ovs_bridge | interface |
Do not combine ovs_bridge and ovs_user_bridge on the same node.
- ovs_bond
- Defines a bond in Open vSwitch (OVS) to join two or more interfaces together. This helps with redundancy and increases bandwidth.

Table 4.3. ovs_bond options

| Option | Default | Description |
|---|---|---|
| name |  | Name of the bond. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses |  | A list of IP addresses assigned to the bond. |
| routes |  | A list of routes assigned to the bond. For more information, see Section 4.2.7, “routes”. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the interface as the primary interface. |
| members |  | A sequence of interface objects that you want to use in the bond. |
| ovs_options |  | A set of options to pass to OVS when creating the bond. For more information, see Table 4.4, “ovs_options parameters for OVS bonds”. |
| ovs_extra |  | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bond. |
Table 4.4. ovs_options parameters for OVS bonds

| ovs_option | Description |
|---|---|
| bond_mode=balance-slb | Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP. |
| bond_mode=active-backup | When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. |
| lacp=[active \| passive \| off] | Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup. |
| other-config:lacp-fallback-ab=true | Set active-backup as the bond mode if LACP fails. |
| other_config:lacp-time=[fast \| slow] | Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow. |
| other_config:bond-detect-mode=[miimon \| carrier] | Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. |
| other_config:bond-miimon-interval=100 | If using miimon, set the heartbeat interval (milliseconds). |
| bond_updelay=1000 | Set the interval (milliseconds) that a link must be up to be activated to prevent flapping. |
| other_config:bond-rebalance-interval=10000 | Set the interval (milliseconds) that flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members. |
- Example - OVS bond
...
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  ...
    members:
    - type: ovs_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      ovs_options: {{ bond_interface_ovs_options }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu }}
In this example, a bond is created as part of an OVS user space bridge:
In this example, a bond is created as part of an OVS user space bridge:

  edpm_network_config_template: |
    ---
    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in nodeset_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
    network_config:
    ...
      members:
      - type: ovs_user_bridge
        name: br-dpdk0
        members:
        - type: ovs_dpdk_bond
          name: dpdkbond0
          rx_queue: {{ num_dpdk_interface_rx_queues }}
          members:
          - type: ovs_dpdk_port
            name: dpdk0
            members:
            - type: interface
              name: nic4
          - type: ovs_dpdk_port
            name: dpdk1
            members:
            - type: interface
              name: nic5
4.2.5. LACP with OVS bonding modes
You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.
Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.
Do not use OVS bonds on control and storage networks. Instead, use Linux bonds with VLAN and LACP.
If you use OVS bonds, and restart the OVS or the neutron agent for updates, hot fixes, and other events, the control plane can be disrupted.
| Objective | OVS bond mode | Compatible LACP options | Notes |
|---|---|---|---|
| High availability (active-passive) | active-backup | active, passive, or off |  |
| Increased throughput (active-active) | balance-slb | active, passive, or off |  |
|  | balance-tcp | active or passive |  |
4.2.6. linux_bond
Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.
| Option | Default | Description |
|---|---|---|
| name |  | Name of the bond. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses |  | A list of IP addresses assigned to the bond. |
| routes |  | A list of routes assigned to the bond. See Section 4.2.7, “routes”. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members |  | A sequence of interface objects that you want to use in the bond. |
| bonding_options |  | A set of options when creating the bond. See the bonding_options parameters for Linux bonds that follow this table. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bond. |
bonding_options parameters for Linux bonds
- The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:
| bonding_options | Description |
|---|---|
| mode | Sets the bonding mode, which in the example is 802.3ad, or LACP mode. |
| lacp_rate | Defines whether LACP packets are sent every 1 second, or every 30 seconds. |
| updelay | Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages. |
| miimon | The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver. |
- Example - Linux bond
  ...
  edpm_network_config_template: |
    ---
    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in nodeset_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
    network_config:
    - type: linux_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
      members:
      - type: interface
        name: ens1f0
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: ens1f1
        mtu: {{ min_viable_mtu }}
  ...
- Example - Linux bond: bonding two interfaces

  ...
  edpm_network_config_template: |
    ---
    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in nodeset_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
    network_config:
    - type: linux_bond
      name: bond1
      members:
      - type: interface
        name: nic2
      - type: interface
        name: nic3
      bonding_options: "mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100"
  ...
- Example - Linux bond set to active-backup mode with one VLAN

  ...
  edpm_network_config_template: |
    ---
    {% set mtu_list = [ctlplane_mtu] %}
    {% for network in nodeset_networks %}
    {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
    {%- endfor %}
    {% set min_viable_mtu = mtu_list | max %}
    network_config:
    - type: linux_bond
      name: bond_api
      bonding_options: "mode=active-backup"
      use_dhcp: false
      dns_servers:
        get_param: DnsServers
      members:
      - type: interface
        name: nic3
        primary: true
      - type: interface
        name: nic4
    - type: vlan
      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
      device: bond_api
      addresses:
      - ip_netmask:
          get_param: InternalApiIpSubnet
  ...
In this example, the bond is set to
802.3adwith LACP mode and one VLAN:... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: "mode=802.3ad updelay=1000 miimon=100" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} ...
4.2.7. routes
Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.
| Option | Default | Description |
|---|---|---|
| ip_netmask | None | IP and netmask of the destination network. |
| default | False | Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0. |
| next_hop | None | The IP address of the router used to reach the destination network. |
- Example - routes
...
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_bridge
    name: br-tenant
    ...
    routes: {{ [ctlplane_host_routes] | flatten | unique }}
...
4.3. Example custom network interfaces
The following example illustrates how you can use a template to customize network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.
- Example
- This template example configures the control group separate from the OVS bridge. The template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. On nic2 and nic3, the template creates a Linux bond for control plane traffic. The template creates an OVS bridge for the RHOSO data plane on nic4 and nic5.
edpm_network_config_os_net_config_mappings:
  edpm-compute-0:
    dmiString: system-serial-number
    id: 3V3J4V3
    nic1: ec:2a:72:40:ca:2e
    nic2: 6c:fe:54:3f:8a:00
    nic3: 6c:fe:54:3f:8a:01
    nic4: 6c:fe:54:3f:8a:02
    nic5: 6c:fe:54:3f:8a:03
    nic6: e8:eb:d3:33:39:12
    nic7: e8:eb:d3:33:39:13
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  - type: interface
    name: nic1
    use_dhcp: false
    use_dhcpv6: false
  - type: linux_bond
    name: bond_api
    use_dhcp: false
    use_dhcpv6: false
    bonding_options: "mode=active-backup"
    dns_servers: {{ ctlplane_dns_nameservers }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes:
    - default: true
      next_hop: 192.168.122.1
    members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3
  {% for network in nodeset_networks if network not in ['external', 'tenant'] %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    device: bond_api
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  {% endfor %}
  - type: ovs_bridge
    name: br-access
    use_dhcp: false
    use_dhcpv6: false
    members:
    - type: linux_bond
      name: bond_data
      mtu: {{ min_viable_mtu }}
      bonding_options: "mode=active-backup"
      members:
      - type: interface
        name: nic4
      - type: interface
        name: nic5
    - type: vlan
      vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
      mtu: {{ lookup('vars', networks_lower['tenant'] ~ '_mtu') }}
      addresses:
      - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
      routes: {{ lookup('vars', networks_lower['tenant'] ~ '_host_routes') }}
Chapter 5. Configuring Networker nodes
In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can add Networker nodes to the RHOSO data plane. Networker nodes can serve as gateways to external networks.
With or without gateways, Networker nodes can serve other purposes as well. For example, Networker nodes are required when you deploy the neutron-dhcp-agent in a RHOSO environment that has a routed spine-leaf network topology with DHCP relays running on leaf nodes. Networker nodes can also provide metadata for SR-IOV ports.
If your NICs support DPDK, you can enable DPDK on the Networker node interfaces to accelerate gateway traffic processing.
Networker nodes are similar to other RHOSO data plane nodes such as Compute nodes. Like Compute nodes, Networker nodes use the RHEL 9.4 or 9.6 operating system. Networker nodes and Compute nodes share some common services and configuration features, and each has a set of role-specific services and configurations. For example, unlike Compute nodes, Networker nodes do not require the Nova or libvirt services.
A data plane typically consists of multiple OpenStackDataPlaneNodeSet custom resources (CRs) to define sets of nodes with different configurations and roles. For example, one node set might define your data plane Networker nodes. Others might define functionally related sets of Compute nodes.
You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
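The following fragment is a minimal sketch of how a node set declares which provisioning model it uses; the preProvisioned field and the node set name are assumptions based on a typical OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
      preProvisioned: true   # set to false for unprovisioned bare-metal nodes that CBO provisions
      ...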
To create and deploy a data plane with or without Networker nodes, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes (Networker nodes and Compute nodes).
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
  One of the following procedures describes how to create Networker node sets with pre-provisioned nodes. The other describes how to create Networker node sets with unprovisioned bare-metal nodes that must be provisioned during the node set deployment.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
- An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:
$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096-
Replace
<key_file_name>with the name to use for the key pair.
-
Replace
Create the
SecretCR for Ansible and apply it to the cluster:$ oc create secret generic dataplane-ansible-ssh-private-key-secret \ --save-config \ --dry-run=client \ --from-file=ssh-privatekey=<key_file_name> \ --from-file=ssh-publickey=<key_file_name>.pub \ [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \ -o yaml | oc apply -f --
Replace
<key_file_name>with the name and location of your SSH key pair file. -
Optional: Only include the
--from-file=authorized_keysoption for bare-metal nodes that must be provisioned when creating the data plane.
-
Replace
If you are creating Compute nodes, create a secret for migration.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''Create the
SecretCR for migration and apply it to the cluster:$ oc create secret generic nova-migration-ssh-key \ --save-config \ --from-file=ssh-privatekey=nova-migration-ssh-key \ --from-file=ssh-publickey=nova-migration-ssh-key.pub \ -n openstack \ -o yaml | oc apply -f -
For nodes that have not been registered to the Red Hat Customer Portal, create the
Secret CR for subscription-manager credentials to register the nodes:

$ oc create secret generic subscription-manager \
  --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry \
  --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
If you are creating Compute nodes, create a secret for libvirt.
Create a file on your workstation named
secret_libvirt.yaml to define the libvirt secret:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>

Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64

Tip: If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
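For example, a minimal sketch of the same libvirt secret that uses stringData so that you supply the password in plain text; the placeholder value is illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
stringData:
  LibvirtPassword: <password>  # plain-text value, maximum length 63 characters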
Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack
Verify that the
Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
5.3. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with pre-provisioned nodes
You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of pre-provisioned Networker nodes in your data plane. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet CR that configures a set of pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.
If you want to use OVS-DPDK on a set of pre-provisioned Networker nodes, you must use a different configuration in the OpenStackDataPlaneNodeSet CR. For an example, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.
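As an illustration of that precedence, the following minimal sketch sets an Ansible variable in nodeTemplate and overrides it for a single node; the variable and values shown are only examples, not required settings:

spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_sshd_allowed_ranges:      # inherited by every node in the set
          - 192.168.122.0/24
  nodes:
    edpm-networker-0:
      ansible:
        ansibleVars:
          edpm_sshd_allowed_ranges:    # node-specific value replaces the template value for this node
            - 192.168.122.0/24
            - 10.0.0.0/24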
Procedure
Create a file on your workstation named
openstack_preprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"

- name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. If necessary, replace the example name networker-nodes with a name that more accurately describes your node set.
- env - Optional: a list of environment variables to pass to the pod.
- Include the services field to override the default services. Remove the nova, libvirt, and other services that are not required by a Networker node:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  ...
  services:
    - redhat
    - bootstrap
    - download-cache
    - reboot-os
    - configure-ovs-dpdk
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - install-certs
    - ovn
    - neutron-metadata
    - neutron-dhcp

- configure-ovs-dpdk - The configure-ovs-dpdk service is required only when DPDK NICs are used in the deployment.
- neutron-metadata - The neutron-metadata service is required only when SR-IOV ports are used in the deployment.
- neutron-dhcp - You can optionally run the neutron-dhcp service on your Networker nodes. You might not need to use neutron-dhcp with OVN if your deployment uses DHCP relays, or advanced DHCP options that are supported by dnsmasq but not by the OVN DHCP implementation.
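For example, if your deployment uses neither DPDK NICs nor SR-IOV ports and you do not run DHCP on the Networker nodes, the override might reduce to the following sketch:

services:
  - redhat
  - bootstrap
  - download-cache
  - reboot-os
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - install-certs
  - ovn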
- Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
    - ctlplane

- Enable the chassis as gateway:

spec:
  ...
  nodeTemplate:
    ansible:
      ...
      edpm_enable_chassis_gw: true

- Specify that the nodes in this set are pre-provisioned:

spec:
  ...
  nodeTemplate:
    ansible:
      ...
      edpm_enable_chassis_gw: true
  ...
  preProvisioned: true

- Add the SSH key secret that you created so that Ansible can connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- Enable persistent logging for the Networker nodes:

nodeTemplate:
  ...
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"

- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
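The following is a minimal sketch of a PVC that satisfies these requirements; the name, storage size, and storage class are assumptions that you adapt to your environment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc          # hypothetical name; use it as <pvc_name>
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce               # required access mode
  volumeMode: Filesystem          # required volume mode
  resources:
    requests:
      storage: 10Gi               # assumed size; adjust for your log retention needs
  # storageClassName: <storage_class>  # must not be backed by the NFS volume plugin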
Specify the management network:
nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ...
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
      - secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
        - {name: "*", state: disabled}
        - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
        - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
      edpm_bootstrap_release_version_package: []

- ansibleUser - The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars - The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
- Add the network configuration template to apply to your Networker nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      neutron_physical_bridge_name: br-ex
      neutron_public_interface_name: eth0
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
nodes:
  ...

- edpm_network_config_nmstate - Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
- edpm_network_config_update - When deploying a node set for the first time, set the edpm_network_config_update variable to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

Important: After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service as a member of the servicesOverride list.

The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:
edpm_network_config_template: |
  ...
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_user_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic1
  - type: linux_bond
    name: bond_api
    use_dhcp: false
    bonding_options: "mode=active-backup"
    dns_servers: {{ ctlplane_dns_nameservers }}
    members:
    - type: interface
      name: nic2
      primary: true
  - type: vlan
    vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
    device: bond_api
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
  - type: ovs_user_bridge
    name: br-link0
    use_dhcp: false
    ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
    members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      rx_queue: 1
      ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
      members:
      - type: ovs_dpdk_port
        name: dpdk1
        members:
        - type: interface
          name: nic3
      - type: ovs_dpdk_port
        name: dpdk2
        members:
        - type: interface
          name: nic4
  - type: ovs_user_bridge
    name: br-link1
    use_dhcp: false
    members:
    - type: ovs_dpdk_bond
      name: dpdkbond1
      mtu: 9000
      rx_queue: 1
      ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
      members:
      - type: ovs_dpdk_port
        name: dpdk3
        members:
        - type: interface
          name: nic5
      - type: ovs_dpdk_port
        name: dpdk4
        members:
        - type: interface
          name: nic6
neutron_physical_bridge_name: br-ex

The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: interface
      name: nic2
      mtu: {{ min_viable_mtu }}
      # force the MAC address of the bridge to this interface
      primary: true
  {% for network in nodeset_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: >-
        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
  {% endfor %}

For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Define each node in this node set:
...
nodes:
  edpm-networker-0:
    hostName: edpm-networker-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.100
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.100
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.100
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-0.example.com
  edpm-networker-1:
    hostName: edpm-networker-1
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.101
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.101
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.101
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.101
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-1.example.com
- edpm-networker-0 - The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
- networks - Defines the IPAM and the DNS records for the node.
- fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
- ansibleVars - Node-specific Ansible variables that customize the node.

Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- When you define the networkData Secret for an individual node (such as edpm-compute-0), it acts as a complete override rather than a supplemental configuration. Because node-specific configurations override the inherited default values from the nodeTemplate section, you must ensure that your node-specific networkData Secret contains the full set of required network configurations for that node, not just the unique values.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Save the openstack_preprovisioned_networker_node_set.yaml definition file.
- Create the data plane resources:

$ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack

- Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset networker-nodes --for condition=SetupReady --timeout=10m

When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

- Verify that the Secret resource was created for the node set:

$ oc get secret | grep networker-nodes
dataplanenodeset-networker-nodes   Opaque   1   3m50s

- Verify the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           46m
ceph-client         46m
ceph-hci-pre        46m
configure-network   46m
configure-os        46m
...
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. Because the nodes are pre-provisioned, they already have an operating system installed and are not provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-networker-nodes
namespace: openstack
spec:
services:
- bootstrap
- download-cache
- reboot-os
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: true
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
set -e
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
neutron_physical_bridge_name: br-ex
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-networker-0:
hostName: edpm-networker-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.100
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.100
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.100
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-0.example.com
edpm-networker-1:
hostName: edpm-networker-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.101
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.101
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.101
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-1.example.com
5.3.2. Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with OVS-DPDK and some node-specific configuration. Because the nodes are pre-provisioned, they already have an operating system installed and are not provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
apiVersion: v1
kind: ConfigMap
metadata:
name: networker-nodeset-values
annotations:
config.kubernetes.io/local-config: "true"
data:
root_password: cmVkaGF0Cg==
preProvisioned: false
baremetalSetTemplate:
ctlplaneInterface: <control plane interface>
cloudUserName: cloud-admin
provisioningInterface: <provisioning network interface>
bmhLabelSelector:
app: openstack-networker
passwordSecret:
name: baremetalset-password-secret
namespace: openstack
ssh_keys:
# Authorized keys that will have access to the dataplane networkers via SSH
authorized: <authorized key>
# The private key that will have access to the dataplane networkers via SSH
private: <private key>
# The public key that will have access to the dataplane networkers via SSH
public: <public key>
nodeset:
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVars:
edpm_enable_chassis_gw: true
...
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
set -e
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
edpm_network_config_template: |
...
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: interface
name: nic2
use_dhcp: false
- type: ovs_user_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: ovs_dpdk_port
rx_queue: 1
name: dpdk0
members:
- type: interface
name: nic3
# These vars are for the network config templates themselves and are
# considered EDPM network defaults.
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: nic1
# edpm_nodes_validation
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
dns_search_domains: []
gather_facts: false
# edpm firewall, change the allowed CIDR if needed
edpm_sshd_configure_firewall: true
edpm_sshd_allowed_ranges:
- 192.168.122.0/24
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-networker-0:
hostName: edpm-networker-0
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-metadata
5.4. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with unprovisioned nodes
To create Networker nodes with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal Networker node.
- Define an OpenStackDataPlaneNodeSet CR for the Networker nodes.
5.4.1. Prerequisites
- Your RHOCP cluster supports provisioning bare-metal nodes.
- Your Cluster Baremetal Operator (CBO) is configured for provisioning.
5.4.2. Creating the BareMetalHost CRs for unprovisioned Networker nodes
You must create a BareMetalHost custom resource (CR) for each bare-metal Networker node. At a minimum, you must provide the data required to add the bare-metal Networker node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range different from the ctlplane address range to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.
Procedure
The Bare Metal Operator (BMO) manages
BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'

If you are using virtual media boot for bare-metal Networker nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'

Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal Networker node in the node set:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-networker-0-bmc-secret
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>

Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

$ echo -n <string> | base64

Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create a file named
bmh_networker_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal Networker node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-networker-0
  namespace: openstack
  labels:
    app: openstack-networker
    workload: networker
spec:
  ...
  bmc:
    address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
    credentialsName: edpm-networker-0-bmc-secret
  bootMACAddress: 00:c7:e4:a7:e7:f3
  bootMode: UEFI
  online: false
  [preprovisioningNetworkDataName: <network_config_secret_name>]

- labels - Metadata labels, such as app, workload, and nodeName, are key-value pairs that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
- address - The URL for communicating with the node’s BMC controller. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
- credentialsName - The name of the Secret CR you created in the previous step for accessing the BMC of the node.
- preprovisioningNetworkDataName - Optional: The name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP documentation.
- Create the BareMetalHost resources:

$ oc create -f bmh_networker_nodes.yaml

- Verify that the BareMetalHost resources have been created and are in the Available state:

$ oc get bmh
NAME               STATE       CONSUMER         ONLINE   ERROR   AGE
edpm-networker-0   Available   openstack-edpm   true             2d21h
edpm-networker-1   Available   openstack-edpm   true             2d21h
...
5.4.3. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with unprovisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for a group of Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example node set CR for unprovisioned Networker nodes with OVS-DPDK.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned Networker nodes.
Procedure
Create a file on your workstation named
openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  tlsEnabled: true
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"

- name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- env - Optional: a list of environment variables to pass to the pod.
- Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
    - ctlplane

- Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

preProvisioned: false

- Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>

- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack-networker. Metadata labels, such as app, workload, and nodeName, are key-value pairs that provide varying levels of granularity for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on labels that match the labels in the corresponding BareMetalHost CR.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
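With the example values from the preceding list filled in, the template might look like the following sketch:

baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: openshift-machine-api
  cloudUserName: cloud-admin
  bmhLabelSelector:
    app: openstack-networker
  ctlplaneInterface: enp6s0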
If you created a custom
OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

baremetalSetTemplate:
  ...
  provisionServerName: my-os-provision-server

Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- Enable persistent logging for the data plane nodes:

nodeTemplate:
  ...
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"

- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:
nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
      - secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
        - {name: "*", state: disabled}
        - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
        - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
      edpm_bootstrap_release_version_package: []

- ansibleUser - The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars - The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
- Add the network configuration template to apply to your data plane nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansiblePort: 22
    ansibleUser: cloud-admin
    ansibleVars:
      ...
      edpm_enable_chassis_gw: true
      edpm_network_config_nmstate: true
      ...
      neutron_physical_bridge_name: br-ex
      neutron_public_interface_name: eth0
      edpm_network_config_update: false

- edpm_network_config_update - When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. Reset it to false after the update.

Important: After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service as a member of the servicesOverride list.

The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:
edpm_network_config_template: |
  ...
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_user_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: ovs_dpdk_port
      driver: mlx5_core
      name: dpdk0
      mtu: {{ min_viable_mtu }}
      members:
      - type: sriov_vf
        device: nic6
        vfid: 0
    - type: interface
      name: nic1
      mtu: {{ min_viable_mtu }}
      # force the MAC address of the bridge to this interface
      primary: true
  {% for network in nodeset_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
  {% endfor %}

The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:
edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: interface
      name: nic2
      mtu: {{ min_viable_mtu }}
      # force the MAC address of the bridge to this interface
      primary: true
  {% for network in nodeset_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: >-
        {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
  {% endfor %}

For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Define each node in this node set:
nodes:
  edpm-networker-0:
    hostName: edpm-networker-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.100
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.100
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.100
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-0.example.com
    bmhLabelSelector:
      nodeName: edpm-networker-0
  edpm-networker-1:
    hostName: edpm-networker-1
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.101
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.101
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.101
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.101
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-1.example.com
    bmhLabelSelector:
      nodeName: edpm-networker-1
- edpm-networker-0 - The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
- networks - Defines the IPAM and the DNS records for the node.
- fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
- bmhLabelSelector - Optional: The BareMetalHost CR metadata label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node.

Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Save the openstack_unprovisioned_node_set.yaml definition file.
- Create the data plane resources:

$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack

- Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady the command returns a condition met message, otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

- Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

- Verify that the nodes have transitioned to the provisioned state:

$ oc get bmh
NAME               STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-networker-0   provisioned   openstack-data-plane   true             3d21h

- Verify that the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           8m40s
ceph-client         8m40s
ceph-hci-pre        8m40s
configure-network   8m40s
configure-os        8m40s
...
5.4.4. Example node set CR for unprovisioned Networker nodes with OVS-DPDK
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Networker nodes with OVS-DPDK and some node-specific configuration. The unprovisioned Networker nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: networker-nodes
namespace: openstack
services:
- redhat
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-metadata
nodeTemplate:
ansible:
ansibleVars:
edpm_enable_chassis_gw: true
edpm_kernel_args: "default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-47,50-95"
edpm_network_config_nmstate: true
...
edpm_network_config_template: |
...
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: sriov_pf
name: nic6
mtu: 9000
numvfs: 2
use_dhcp: false
defroute: false
nm_controlled: true
hotplug: true
promisc: false
- type: ovs_user_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: ovs_dpdk_port
driver: mlx5_core
name: dpdk0
mtu: {{ min_viable_mtu }}
members:
- type: sriov_vf
device: nic6
vfid: 0
- type: linux_bond
name: bond_api
use_dhcp: false
bonding_options: "mode=active-backup"
dns_servers: {{ ctlplane_dns_nameservers }}
members:
- type: sriov_vf
device: nic6
driver: mlx5_core
mtu: {{ min_viable_mtu }}
spoofcheck: false
promisc: false
vfid: 1
primary: true
- type: vlan
vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
device: bond_api
addresses:
- ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
- type: ovs_user_bridge
name: br-link0
use_dhcp: false
ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
addresses:
- ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
members:
- type: ovs_dpdk_bond
name: dpdkbond0
mtu: 9000
rx_queue: 1
ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
members:
- type: ovs_dpdk_port
name: dpdk1
members:
- type: interface
name: nic4
- type: ovs_dpdk_port
name: dpdk2
members:
- type: interface
name: nic5
- type: ovs_user_bridge
name: br-link1
use_dhcp: false
members:
- type: ovs_dpdk_bond
name: dpdkbond1
mtu: 9000
rx_queue: 1
ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
members:
- type: ovs_dpdk_port
name: dpdk3
members:
- type: interface
name: nic2
- type: ovs_dpdk_port
name: dpdk4
members:
- type: interface
name: nic3
edpm_ovn_bridge_mappings:
- access:br-ex
- dpdkmgmt:br-link0
- dpdkdata0:br-link1
edpm_ovs_dpdk_memory_channels: 4
edpm_ovs_dpdk_pmd_core_list: 2,3,50,51
edpm_ovs_dpdk_socket_memory: 4096,4096
edpm_tuned_isolated_cores: 2-47,50-95
edpm_tuned_profile: cpu-partitioning
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
5.5. Deploying the data plane
You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.
Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.
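For example, to re-run Ansible against the same node sets after a configuration change, you create a new CR with a different name. The following is a minimal sketch; the CR name and node set name are illustrative:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy-update   # a new, unique name triggers a new Ansible execution
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane         # reference the same OpenStackDataPlaneNodeSet CRs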
Procedure
Create a file on your workstation named
openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack

- name - The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
- Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

spec:
  nodeSets:
    - openstack-data-plane
    - <nodeSet_name>
    - ...
    - <nodeSet_name>

- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the openstack_data_plane_deploy.yaml deployment file.
- Deploy the data plane:

$ oc create -f openstack_data_plane_deploy.yaml -n openstack

- You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

- Verify that the data plane is deployed:

$ oc get openstackdataplanedeployment -n openstack
NAME                STATUS   MESSAGE
data-plane-deploy   True     Setup Complete

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready

For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
Chapter 6. Managing project networks
Project networks help you to isolate network traffic for cloud computing. Steps to create a project network include planning and creating the network, and adding subnets and routers.
6.1. VLAN planning
When you plan for VLANs in your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets you can segregate traffic between systems into VLANs.
For example, management or API traffic should not share a network with systems that serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to govern traffic flow.
You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment.
6.2. Networks for Red Hat OpenStack Services on OpenShift
Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.
- Control plane network
- Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- Designate network
- Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
- Designateext network
- Used to provide external access to the DNS service resolver and the DNS servers.
- External network
An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
Note: When an external network is used for workloads, an OVN gateway is required in some use cases. For more information on use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.
- Internal API network
- Used for internal communication between RHOSO components.
- Octavia network
- Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
- Storage network
- Used for block storage, RBD, NFS, FC, and iSCSI.
- Storage Management network
An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.
Note: For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.
- Tenant (project) network
- Used for data communication between virtual machine instances within the cloud deployment.
Figure 6.1. Physical networks for RHOSO
The following table details the default networks used in a RHOSO deployment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| Control plane (ctlplane) | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| Designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| Designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| External | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| Internal API (internalapi) | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| Octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| Storage (storage) | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| Storage Management | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| Tenant (tenant) | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
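To relate the table to the data plane configuration: the NetConfig allocationRange column corresponds to the allocation ranges that you define per subnet in the NetConfig CR, and the fixedIP values in a node set must fall within those ranges. The following is a sketch for the control plane network only, assuming the NetConfig schema used in RHOSO 18; adapt the resource name, DNS domain, and ranges to your environment:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig                   # assumed resource name
  namespace: openstack
spec:
  networks:
    - name: ctlplane
      dnsDomain: ctlplane.example.com   # illustrative DNS domain
      subnets:
        - name: subnet1
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1        # illustrative gateway
          allocationRanges:
            - start: 192.168.122.100    # matches the NetConfig allocationRange column
              end: 192.168.122.250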
6.3. IP address consumption
In Red Hat OpenStack Services on OpenShift (RHOSO) environments the following systems consume IP addresses from your allocated range:
- Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connecting across to different switches for redundancy purposes.
- Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share.
6.4. Virtual networking
The following virtual resources consume IP addresses in OpenStack Networking in Red Hat OpenStack Services on OpenShift (RHOSO) environments. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network:
- Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances.
- Virtual routers - Each router interface plugging into a subnet requires one IP address.
- Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network.
- Management traffic - Includes OpenStack Services and API traffic. All services share a small number of VIPs. API, RPC and database services communicate on the internal API VIP.
6.5. Example network plan
This example shows a number of networks in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that accommodate multiple subnets, with each subnet being assigned a range of IP addresses:
- Example subnet plan
| Subnet name | Address range | Number of addresses | Subnet Mask |
|---|---|---|---|
| Provisioning network | 192.168.100.1 - 192.168.100.250 | 250 | 255.255.255.0 |
| Internal API network | 172.16.1.10 - 172.16.1.250 | 241 | 255.255.255.0 |
| Storage | 172.16.2.10 - 172.16.2.250 | 241 | 255.255.255.0 |
| Storage Management | 172.16.3.10 - 172.16.3.250 | 241 | 255.255.255.0 |
| Tenant network (Geneve/VLAN) | 172.16.4.10 - 172.16.4.250 | 241 | 255.255.255.0 |
| External network (incl. floating IPs) | 10.1.2.10 - 10.1.3.222 | 469 | 255.255.254.0 |
| Provider network (infrastructure) | 10.10.3.10 - 10.10.3.250 | 241 | 255.255.252.0 |
6.6. Working with subnets Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you use subnets to grant network connectivity to instances. A subnet is a pool of IP addresses, and instances are attached to a Networking service (neutron) network. One network can have multiple subnets, and you can also add IP addresses from multiple subnets to a single port.
You can create subnets only in pre-existing networks. Remember that project networks in the Networking service can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them.
You can lessen network latency and load by grouping systems in the same subnet that require a high volume of traffic between each other.
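For example, the following commands sketch how two subnets can share one pre-existing project network so that distinct groups of systems remain on separate address ranges; the names and CIDRs are illustrative:

$ openstack subnet create --network my-project-net --subnet-range 192.0.2.0/26 frontend-subnet
$ openstack subnet create --network my-project-net --subnet-range 192.0.2.64/26 backend-subnet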
6.7. Configuring floating IP port forwarding Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to enable users to set up port forwarding for floating IPs, you must enable the Networking service (neutron) port_forwarding service plug-in.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
- The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.
Procedure
Update the control plane:
$ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
---
spec:
  neutron:
    template:
      customServiceConfig: |
        [DEFAULT]
        service_plugins=ovn-router,port_forwarding
"

Note: The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.

RHOSO users can now set up port forwarding for floating IPs.
Verification
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient

Ensure that the Networking service has successfully loaded the port_forwarding and router service plug-ins:

$ openstack extension list --network -c Name -c Alias --max-width 74 \
  --os-cloud <cloud_name> | \
  grep -i -e 'Neutron L3 Router' -e floating-ip-port-forwarding

Replace <cloud_name> with the name of the cloud on which you are running the command.
- Sample output
A successful verification produces output similar to the following:
| Floating IP Port Forwarding | floating-ip-port-forwarding |
| Neutron L3 Router | router |
Exit the openstackclient pod:

$ exit
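With the port_forwarding plug-in loaded, a user can then create port forwarding rules on a floating IP. The following sketch forwards TCP port 2222 of a floating IP to port 22 on an instance port; the port UUID, internal IP address, and floating IP address are placeholders:

$ openstack floating ip port forwarding create \
    --internal-ip-address 10.0.0.10 \
    --port 2d35f5e8-5b89-4b3b-b8e5-3f2e1c1a9c11 \
    --internal-protocol-port 22 \
    --external-protocol-port 2222 \
    --protocol tcp \
    203.0.113.25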
6.8. Bridging the physical network Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments you can bridge your virtual network to the physical network to enable connectivity to and from virtual instances.
In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge acts as the intermediary between the physical network and any virtual networks.
As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.
To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:
Procedure
Open /etc/sysconfig/network-scripts/ifcfg-eth0 in a text editor, and configure the physical interface as an Open vSwitch port. Note the current values of the following parameters, because you move them to the bridge in the next step:
- IPADDR
- NETMASK
- GATEWAY
- DNS1 (name server)

Here is an example:

DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
Open /etc/sysconfig/network-scripts/ifcfg-br-ex in a text editor, and update the virtual bridge parameters with the IP address values that were previously allocated to eth0:

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes

You can now assign floating IP addresses to instances and make them available to the physical network.
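For example, assuming an external network named public exists on the bridged physical network, a floating IP can be allocated and attached to an instance as follows; the instance name and addresses are placeholders:

$ openstack floating ip create public
$ openstack server add floating ip my-instance 192.168.120.25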
Chapter 7. Using Quality of Service (QoS) policies to manage data traffic Copy linkLink copied to clipboard!
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy.
Ports that the network owns internally, such as DHCP and internal router ports, are excluded from network policy application.
You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to.
7.1. QoS rules Copy linkLink copied to clipboard!
You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron):
- Minimum bandwidth (minimum_bandwidth) - Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified bandwidth to each port on which the rule is applied.
- Bandwidth limit (bandwidth_limit) - Provides bandwidth limitations on networks, ports, floating IPs (FIPs), and router gateway IPs. If implemented, any traffic that exceeds the specified rate is dropped.
- DSCP marking (dscp_marking) - Marks network traffic with a differentiated services code point (DSCP) value.
- Minimum packet rate (minimum_packet_rate) - Provides minimum rate of packet transmission constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified rate of packet transmission to each port on which the rule is applied. Currently, only placement enforcement is supported.

A CLI example that uses some of these rule types follows the tables below.
QoS policies can be enforced in various contexts, including virtual machine instance placements, floating IP assignments, and gateway IP assignments.
Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress traffic (upload from instance), ingress traffic (download to instance), or both.
In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports. For more information, see Section 7.2, “Configuring the Networking service for QoS policies”.
Supported traffic direction by mechanism driver:

| Rule [1] | ML2/SR-IOV | ML2/OVN |
|---|---|---|
| Minimum bandwidth | Egress only | Egress only [2] |
| Bandwidth limit | Egress only [3] | Egress and ingress |
| DSCP marking | N/A | Egress only [4] |

[1] RHOSO does not support QoS for trunk ports.
[2] In ML2/OVN deployments, minimum bandwidth rules are enforced in the physical device. You cannot configure this enforcement on bond interfaces.
[3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it.
[4] ML2/OVN does not support DSCP marking on tunneled protocols.
Supported traffic direction by mechanism driver:

| Enforcement type | ML2/SR-IOV | ML2/OVN |
|---|---|---|
| Placement | Egress and ingress | Technology preview [1] |

[1] See OSPRH-507.
Supported traffic direction by mechanism driver:

| Enforcement type | ML2/OVN |
|---|---|
| Floating IP | Egress and ingress |
| Gateway IP | Egress and ingress |
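As a concrete illustration of the rule types and directions listed above, the following sketch creates a policy with an egress bandwidth limit rule and a DSCP marking rule and applies it to a port; the policy and port names are placeholders:

$ openstack network qos policy create web-tier-qos
$ openstack network qos rule create --type bandwidth-limit \
    --max-kbps 3000 --max-burst-kbits 300 --egress web-tier-qos
$ openstack network qos rule create --type dscp-marking \
    --dscp-mark 26 web-tier-qos
$ openstack port set --qos-policy web-tier-qos my-instance-port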
7.2. Configuring the Networking service for QoS policies Copy linkLink copied to clipboard!
The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. With the ML2/OVN mechanism driver, qos is loaded by default; with ML2/SR-IOV, it is not.
When you use the qos service plug-in with the ML2/SR-IOV mechanism driver, you must also load the qos extension on the SR-IOV NIC agent.
The following list summarizes the tasks that you must perform to configure the Networking service for QoS. The task details follow this list:
For all types of QoS policies:
- Add the qos service plug-in.
- Add the qos extension for the agents (SR-IOV only).
- In ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports.

Additional tasks for scheduling VM instances using minimum bandwidth policies only:
- Specify the hypervisor name if it differs from the name that the Compute service (nova) uses.
- Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node.
- (Optional) Mark vnic_types as not supported.

Additional task for DSCP marking policies:
- Enable edpm_ovn_encap_tos. By default, edpm_ovn_encap_tos is disabled.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
If you are using the ML2/SR-IOV mechanism driver, you must enable the
qos agent extension on the Compute nodes, also referred to as the RHOSO data plane. For more information, see Configuring the Networking service for QoS policies for SR-IOV.
Add the required QoS configuration. Place the configuration in the
edpm_network_config_template under ansibleVars:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: my-data-plane-node-set
spec:
  ...
  nodeTemplate:
    ...
    ansible:
      ansibleVars:
        edpm_network_config_template: |
          ---
          OvnHardwareOffloadedQos: true
  ...

If you want to create DSCP marking policies, add
edpm_ovn_encap_tos: '1' under ansibleVars:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: my-data-plane-node-set
spec:
  ...
  nodeTemplate:
    ...
    ansible:
      ansibleVars:
        edpm_network_config_template: |
          ---
          OvnHardwareOffloadedQos: true
        edpm_ovn_encap_tos: 1
  ...

When
edpm_ovn_encap_tos is enabled (has a value of 1), the Networking service copies the DSCP value of the inner header to the outer header. The default is 0.
Save the
OpenStackDataPlaneNodeSetCR definition file. Apply the updated
OpenStackDataPlaneNodeSetCR configuration:$ oc apply -f my_data_plane_node_set.yamlVerify that the data plane resource has been updated:
$ oc get openstackdataplanenodeset- Sample output
NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started
Create a file on your workstation to define the
OpenStackDataPlaneDeploymentCR, for example,my_data_plane_deploy.yaml:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deployTipGive the definition file and the
OpenStackDataPlaneDeploymentCR a unique and descriptive name that indicates the purpose of the modified node set.Add the
OpenStackDataPlaneNodeSetCR that you modified:spec: nodeSets: - my-data-plane-node-set-
Save the
OpenStackDataPlaneDeploymentCR deployment file. Deploy the modified
OpenStackDataPlaneNodeSetCR:$ oc create -f my_data_plane_deploy.yaml -n openstackYou can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -n openstack -w $ oc logs -l app=openstackansibleee -n openstack -f \ --max-log-requests 10Verify that the modified
OpenStackDataPlaneNodeSetCR is deployed:- Example
$ oc get openstackdataplanedeployment -n openstack- Sample output
NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete
Repeat the
oc getcommand until you see theNodeSet Readymessage:- Example
$ oc get openstackdataplanenodeset -n openstack- Sample output
NAME STATUS MESSAGE my-data-plane-node-set True NodeSet ReadyFor information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verification
Confirm that the
qosservice plug-in is loaded:$ openstack network qos policy listIf the
qosservice plug-in is loaded, then you do not receive aResourceNotFounderror.
7.3. Configuring the Networking service for QoS policies for SR-IOV Copy linkLink copied to clipboard!
The quality of service feature in the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) is provided through the qos service plug-in. If your Networking service ML2 mechanism driver is SR-IOV, then you must also load the qos extension driver for the NIC switch agent, neutron-sriov-nic-agent, which runs on the Compute nodes, also referred to as the RHOSO data plane.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
-
Open the
OpenStackDataPlaneNodeSetCR definition file for the node set you want to update, for example,my_data_plane_node_set.yaml. Add the required QoS configuration,
NeutronSriovAgentExtensions: "qos".Place the configuration in the
edpm_network_config_template under ansibleVars:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: my-data-plane-node-set
spec:
  ...
  nodeTemplate:
    ...
    ansible:
      ansibleVars:
        edpm_network_config_template: |
          ---
          NeutronSriovAgentExtensions: "qos"
  ...
Save the
OpenStackDataPlaneNodeSetCR definition file. Apply the updated
OpenStackDataPlaneNodeSetCR configuration:$ oc apply -f my_data_plane_node_set.yamlVerify that the data plane resource has been updated:
$ oc get openstackdataplanenodeset- Sample output
NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started
Create a file on your workstation to define the
OpenStackDataPlaneDeploymentCR, for example,my_data_plane_deploy.yaml:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deployTipGive the definition file and the
OpenStackDataPlaneDeploymentCR a unique and descriptive name that indicates the purpose of the modified node set.Add the
OpenStackDataPlaneNodeSetCR that you modified:spec: nodeSets: - my-data-plane-node-set-
Save the
OpenStackDataPlaneDeploymentCR deployment file. Deploy the modified
OpenStackDataPlaneNodeSetCR:$ oc create -f my_data_plane_deploy.yaml -n openstackYou can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -n openstack -w $ oc logs -l app=openstackansibleee -n openstack -f \ --max-log-requests 10Verify that the modified
OpenStackDataPlaneNodeSetCR is deployed:- Example
$ oc get openstackdataplanedeployment -n openstack- Sample output
NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete
Repeat the
oc getcommand until you see theNodeSet Readymessage:- Example
$ oc get openstackdataplanenodeset -n openstack- Sample output
NAME STATUS MESSAGE my-data-plane-node-set True NodeSet ReadyFor information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verification
Confirm that the NIC switch agent, neutron-sriov-nic-agent, has loaded the qos extension.
Obtain the UUID for the NIC switch agent:
$ openstack network agent listWith the
neutron-sriov-nic-agentUUID, run the following command:$ openstack network agent show <uuid>- Example
$ openstack network agent show 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f \ --max-width 70- Sample output
You should see an agent object with a field called
configuration. When theqosextension is loaded, theextensionsfield should containqosin its list.-------------------------------------------------------------------+ | Field | Value | -------------------------------------------------------------------+ | admin_state_up | UP | | agent_type | NIC Switch agent | | alive | :-) | | availability_zone | None | | binary | neutron-sriov-nic-agent | | configuration | {device_mappings: {}, devices: 0, extensi | | | ons: [qos], resource_provider_bandwidths: | | | {}, resource_provider_hypervisors: {}, reso | | | urce_provider_inventory_defaults: {allocatio | | | n_ratio: 1.0, min_unit: 1, step_size: 1, | | | reserved: 0}} | | created_at | 2024-08-08 08:22:57 | | description | None | | ha_state | None | | host | edpm-compute-0.ctlplane.example.com | | id | 8676ccb3-1de0-4ca6-8fb7-b814015d9e5f | | last_heartbeat_at | 2024-08-08 08:24:27 | | resources_synced | None | | started_at | 2024-08-08 08:22:57 | | topic | N/A | -------------------------------------------------------------------+
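After the qos extension is loaded on the SR-IOV NIC agent, a minimum bandwidth rule can be attached to a policy and used when creating an SR-IOV port. The following sketch assumes a provider network named sriov-net; for placement enforcement, the resource provider bandwidths described in Section 7.2 must also be configured:

$ openstack network qos policy create sriov-guarantee
$ openstack network qos rule create --type minimum-bandwidth \
    --min-kbps 1000000 --egress sriov-guarantee
$ openstack port create --network sriov-net --vnic-type direct \
    --qos-policy sriov-guarantee sriov-port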
Chapter 8. Configuring RBAC policies Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, use role-based access control (RBAC) policies in the Networking service (neutron) to control which projects can attach instances to a network and access resources like QoS policies, security groups, address scopes, subnet pools, and address groups.
Networking service RBAC is separate from secure role-based access control (SRBAC) that the Identity service (keystone) uses in RHOSO.
8.1. Creating RBAC policies Copy linkLink copied to clipboard!
This example procedure demonstrates how to use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation:

$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

View the list of available networks:
$ openstack network list+--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+View the list of projects:
$ openstack project list+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+Create a RBAC entry for the
web-serversnetwork that grants access to theauditorsproject (4b0b98f8c6c040f38ba4f7146e8680f5):$ openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers- Sample output
+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+As a result, users in the auditors project can connect instances to the
web-serversnetwork.
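For example, a user in the auditors project could then launch an instance directly on the shared network; the image and flavor names are placeholders:

$ openstack server create --image rhel --flavor m1.small \
    --network web-servers auditors-instance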
8.2. Reviewing RBAC policies Copy linkLink copied to clipboard!
This example procedure demonstrates how to obtain information about a Networking service (neutron) role-based access control (RBAC) policy used to grant a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation:

$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Run the
openstack network rbac listcommand to retrieve the ID of your existing role-based access control (RBAC) policies:$ openstack network rbac list- Sample output
+--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+
Run the
openstack network rbac show command to view the details of a specific RBAC entry:

$ openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709

Sample output:
+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+
8.3. Deleting RBAC policies Copy linkLink copied to clipboard!
This example procedure demonstrates how to remove a Networking service (neutron) role-based access control (RBAC) policy that grants a project access to a shared network in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation:

$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Run the
openstack network rbac listcommand to retrieve the ID of your existing role-based access control (RBAC) policies:# openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+Run the
openstack network rbac deletecommand to delete the RBAC, using the ID of the RBAC that you want to delete:# openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709
8.4. Granting RBAC policy access for external networks Copy linkLink copied to clipboard!
In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can use a Networking service (neutron) role-based access control (RBAC) policy to grant a project access to external networks, that is, networks with gateway interfaces attached.
In the following example, an RBAC policy is created for the web-servers network and access is granted to the engineering project, c717f263785d4679b16a122516247deb:
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient

Create a new RBAC policy using the --action access_as_external option:

$ openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers

Sample output:
Created a new rbac_policy:
+----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+As a result, users in the
engineeringproject are able to view the network or connect instances to it:$ openstack network list+--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+
Exit the
openstackclientpod:$ exit
Chapter 9. Common administrative networking tasks Copy linkLink copied to clipboard!
Sometimes you need to perform administration tasks on the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) such as specifying the name assigned to ports by the internal DNS.
9.2. Specifying the name that DNS assigns to ports Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can specify the name assigned to ports by the internal DNS. You enable this functionality in the Networking service (neutron), by loading the ML2 extension driver, DNS domain for ports, dns_domain_ports.
After loading the driver, you can use the OpenStack Client port commands, port set or port create, with --dns-name to assign a port name.
You must enable the DNS domain for ports extension (dns_domain_ports) for DNS to internally resolve names for ports in your RHOSO environment. Using the NeutronDnsDomain default value, openstacklocal, means that the Networking service does not internally resolve port names for DNS.
Also, when the DNS domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Update the control plane with the key value pair, extension_drivers=dns_domain_ports:

$ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
---
spec:
  neutron:
    template:
      customServiceConfig: |
        [ml2]
        extension_drivers=dns_domain_ports
"

Note: If you set dns_domain_ports, ensure that the deployment does not also use dns_domain, the DNS Integration extension. These extensions are incompatible, and both extensions cannot be defined simultaneously.

RHOSO users can now assign DNS names to ports.
Verification
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient

Confirm that the Networking service has successfully loaded the dns_domain_ports ML2 extension driver:

$ openstack extension list --network --max-width 75 \
  --os-cloud <cloud_name> | grep dns-domain-ports

Replace <cloud_name> with the name of the cloud on which you are running the command.
- Sample output
A successful verification produces output similar to the following:
| dns_domain for ports | dns-domain-ports | Allows the DNS domain to be specified for a network port.
Create a new port (
new_port) on a network (public). Assign a DNS name (my_port) to the port.- Example
$ openstack port create --network public --dns-name my_port new_port
Display the details for your port (
new_port).- Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port- Sample output
+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_port.example.com', | | | hostname='my_port', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_port | | name | new_port | +-------------------------+----------------------------------------------+Under
dns_assignment, the fully qualified domain name (fqdn) value for the port contains a concatenation of the DNS name (my_port) and the domain name (example.com) that you set earlier withNeutronDnsDomain.
Create a new VM instance (
my_vm) using the port (new_port) that you just created.- Example
$ openstack server create --image rhel --flavor m1.small --port new_port my_vm
Display the details for your port (
new_port).- Example
$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port- Sample output
+ ---- +-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_vm.example.com', | | | hostname='my_vm', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_vm | | name | new_port | +-------------------------+----------------------------------------------+ ----+ Note that the Compute service changes the
dns_nameattribute from its original value (my_port) to the name of the instance with which the port is associated (my_vm).Exit the
openstackclientpod:$ exit
9.3. Enabling NUMA affinity on ports Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to enable users to create instances with NUMA affinity on the port, you must load the Networking service (neutron) ML2 extension driver, NUMA port affinity policy, port_numa_affinity_policy.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Update the control plane with the key value pair,
extension_drivers=port_numa_affinity_policy:

$ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
---
spec:
  neutron:
    template:
      customServiceConfig: |
        [ml2]
        extension_drivers=port_numa_affinity_policy
"
Verification
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient

Confirm that the Networking service has successfully loaded the port_numa_affinity_policy ML2 extension driver:

$ openstack extension list --network --max-width 74 \
  --os-cloud <cloud_name> | grep port-numa-affinity-policy

Replace <cloud_name> with the name of the cloud on which you are running the command.
- Sample output
A successful verification produces output similar to the following:
| Port NUMA affinity policy | port-numa-affinity-policy | Expose the port NUMA affinity policy
Create a new port.
When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port:
- --numa-policy-required - NUMA affinity policy required to schedule this port.
- --numa-policy-preferred - NUMA affinity policy preferred to schedule this port.
- --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port.

Example:
$ openstack port create --network public \ --numa-policy-legacy myNUMAAffinityPort
-
Display the details for your port.
- Example
$ openstack port show myNUMAAffinityPort -c numa_affinity_policy- Sample output
When the extension is loaded, the
Value column should read legacy, preferred, or required. If the extension has failed to load, Value reads None:

+----------------------+--------+
| Field                | Value  |
+----------------------+--------+
| numa_affinity_policy | legacy |
+----------------------+--------+
Exit the
openstackclientpod:$ exit
9.4. Limiting queries to the metadata service Copy linkLink copied to clipboard!
To protect Red Hat OpenStack Services on OpenShift (RHOSO) environments against cyber threats such as denial of service (DoS) attacks, the Networking service (neutron) offers administrators the ability to limit the rate at which VM instances can query the Compute metadata service. Administrators do this by assigning values to a set of parameters that the Networking service uses to configure HAProxy servers to perform the rate limiting. The HAProxy servers run inside the metadata service.
To add metadata rate limiting for a node set, complete these tasks:
- Create a ConfigMap custom resource (CR) to configure the nodes.
- Create a custom service for the feature that runs the playbook for the service.
- Include the ConfigMap CR in the custom service.
A detailed procedure follows.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
- Your RHOSO environment uses IPv4 networking. Currently, the Networking service does not support metadata rate limiting on IPv6 networks.
- You have a scheduled maintenance window. This procedure requires you to restart the OVN metadata service.
Procedure
Create a
ConfigMap CR that defines a new configuration for metadata rate limiting, and save it to a YAML file on your workstation, for example, neutron-metadata-rate-limit.yaml.

Note: Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the transport_url.

Set values for the following rate limiting parameters:
rate_limit_enabled-
enables you to limit the rate of metadata requests. The default value is
false. Set the value totrueto enable metadata rate limiting. ip_versions-
the IP version,
4, used for metadata IP addresses on which you want to control query rates. RHOSP does not yet support metadata rate limiting for IPv6 networks. base_window_duration-
the time span, in seconds, during which query requests are limited. The default value is
10seconds. base_query_rate_limit-
the maximum number of requests allowed during the
base_window_duration. The default value is10requests. burst_window_duration-
the time span, in seconds, that a request rate higher than the
base_window_durationis allowed. The default value is10seconds. burst_query_rate_limit-
the maximum number of requests allowed during the
burst_window_duration. The default value is10requests. - Example
In this example, the Networking service is configured for a base time and rate that allows instances to query the IPv4 metadata service IP address 6 times over a 60 second period. The Networking service is also configured for a burst time and rate that allows a higher rate of 2 queries during shorter periods of 10 seconds each:
apiVersion: v1 kind: ConfigMap metadata: name: neutron-metadata-rate-limit data: 20-neutron-metadata-rate.conf: | [metadata_rate_limiting] rate_limit_enabled = True ip_versions = 4 base_window_duration = 60 base_query_rate_limit = 6 burst_window_duration = 10 burst_query_rate_limit = 2 ...
Create the
ConfigMapobject by using theConfigMapCR file.- Example
$ oc create -f neutron-metadata-rate-limit.yaml -n openstack
Create an
OpenStackDataPlaneServiceCR that defines the metadata rate limit custom service, and save it to a YAML file on your workstation, for exampleneutron-metadata-rate-limit-service.yaml:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limitAdd the
ConfigMapCRs to the custom service, and specify theSecretCR for the cell that the node set that runs this service connects to:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limit spec: dataSources: - configMapRef: name: neutron-metadata-rate-limit - secretRef: name: neutron-ovn-metadata-agent-neutron-config - secretRef: name: nova-metadata-neutron-config - configMapRef: name: neutron-metadata-rate-limit tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-ovn keyUsages: - digital signature - key encipherment - client auth caCerts: combined-ca-bundle containerImageFields: - EdpmNeutronMetadataAgentImageSpecify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the
playbookContentsfield:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-metadata-rate-limit spec: playbook: osp.edpm.neutron_metadata dataSources: - configMapRef: name: neutron-metadata-rate-limit - secretRef: name: neutron-ovn-metadata-agent-neutron-config - secretRef: name: nova-metadata-neutron-config tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-ovn keyUsages: - digital signature - key encipherment - client auth caCerts: combined-ca-bundle containerImageFields: - EdpmNeutronMetadataAgentImageCreate the
metadata-rate-limitservice:$ oc apply -f neutron-metadata-rate-limit -n openstack
Verification
Confirm that the custom service is created:
$ oc get openstackdataplaneservice neutron-metadata-rate-limit -o yaml -n openstack
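Note that, as with other data plane customizations, the custom service only runs on nodes after it is referenced from an OpenStackDataPlaneNodeSet CR and rolled out with an OpenStackDataPlaneDeployment CR. The following fragment is a sketch of how the custom service might replace the default neutron-metadata entry in a node set's services list; the surrounding service names are illustrative and depend on your deployment:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: my-data-plane-node-set
spec:
  services:
    - configure-network
    - install-os
    - configure-os
    - run-os
    - neutron-metadata-rate-limit   # custom service defined in this procedure
    - libvirt
    - nova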
9.5. Enabling and configuring FDB learning Copy linkLink copied to clipboard!
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can use forwarding database (FDB) learning to prevent traffic flooding on ports that have security disabled and belong to a provider network (network has an ML2/OVN localnet port). You can also set the maximum number of FDB entries that can be removed in a single transaction.
Prerequisites
-
You have the
occommand line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges.
Procedure
-
Open your OpenStackControlPlane custom resource (CR) file,
openstack_control_plane.yaml, on your workstation. Add the following configuration to the
neutron service configuration:

spec:
  neutron:
    template:
      customServiceConfig: |
        [ovn]
        localnet_learn_fdb = true
        fdb_age_threshold = 300
        [ovn_nb_global]
        fdb_removal_limit = 50
localnet_learn_fdb- Enables FDB learning by allowing the localnet ports that are created for each provider network to learn the MAC addresses and store them in the FDB SB table. -
fdb_age_threshold- Sets the maximum time (seconds) that the learned MACs stay in the FDB table, and prevents this table from growing indefinitely. fdb_removal_limit- Limits the number of FDB table entries that can be removed in a single transaction by the aging function.ImportantIf you disable port security on a provider network in an environment, you must set related forwarding database (FDB) learning and aging parameters.
-
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.