Deploying a network functions virtualization environment
Planning, installing, and configuring network functions virtualization (NFV) in Red Hat OpenStack Services on OpenShift
Abstract
This guide describes how to plan, install, and configure network functions virtualization (NFV) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV)
Network functions virtualization (NFV) is a software-based solution that helps communication service providers (CSPs) to move beyond the traditional, proprietary hardware to achieve greater efficiency and agility and to reduce operational costs.
Using NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment allows for IT and network convergence by providing a virtualized infrastructure that uses standard virtualization technologies to create virtualized network functions (VNFs) from network functions that traditionally run on dedicated hardware devices such as switches, routers, and storage appliances.
1.1. Advantages of NFV
The main advantages of implementing network functions virtualization (NFV) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment are:
- Accelerates the time-to-market by enabling you to quickly deploy and scale new networking services to address changing demands.
- Supports innovation by enabling service developers to self-manage their resources and prototype using the same platform that will be used in production.
- Addresses customer demands in hours or minutes instead of weeks or days, without sacrificing security or performance.
- Reduces capital expenditure because it uses commodity-off-the-shelf hardware instead of expensive tailor-made equipment.
- Uses streamlined operations and automation that optimize day-to-day tasks to improve employee productivity and reduce operational costs.
1.2. Supported Configurations for NFV Deployments
Red Hat supports network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments using Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV).
Other configurations include:
- Open vSwitch (OVS) with LACP
- Hyper-converged infrastructure (HCI)
1.3. NFV data plane connectivity
With the introduction of network functions virtualization (NFV), more networking vendors are starting to implement their traditional devices as VNFs. While the majority of networking vendors are considering virtual machines, some are also investigating a container-based approach as a design choice. A Red Hat OpenStack Services on OpenShift (RHOSO) environment should be rich and flexible for two primary reasons:
- Application readiness - Network vendors are currently in the process of transforming their devices into VNFs. Different VNFs in the market have different maturity levels; common barriers to this readiness include enabling RESTful interfaces in their APIs, evolving their data models to become stateless, and providing automated management operations. OpenStack should provide a common platform for all.
- Broad use-cases - NFV includes a broad range of applications that serve different use-cases. For example, Virtual Customer Premise Equipment (vCPE) aims at providing a number of network functions such as routing, firewall, virtual private network (VPN), and network address translation (NAT) at customer premises. Virtual Evolved Packet Core (vEPC) is a cloud architecture that provides a cost-effective platform for the core components of a Long-Term Evolution (LTE) network, allowing dynamic provisioning of gateways and mobile endpoints to sustain the increased volumes of data traffic from smartphones and other devices.
These use cases are implemented using different network applications and protocols, and require different connectivity, isolation, and performance characteristics from the infrastructure. It is also common to separate the control plane interfaces and protocols from the actual forwarding plane. OpenStack must be flexible enough to offer different datapath connectivity options.
In principle, there are two common approaches for providing data plane connectivity to virtual machines:
- Direct hardware access bypasses the Linux kernel and provides secure direct memory access (DMA) to the physical NIC using technologies such as PCI Passthrough or single root I/O virtualization (SR-IOV) for both Virtual Function (VF) and Physical Function (PF) pass-through.
- Using a virtual switch (vSwitch), implemented as a software service of the hypervisor. Virtual machines are connected to the vSwitch using virtual interfaces (vNICs), and the vSwitch forwards traffic between virtual machines, as well as between virtual machines and the physical network.
Some of the fast data path options are as follows:
- Single Root I/O Virtualization (SR-IOV) is a standard that makes a single PCI hardware device appear as multiple virtual PCI devices. It works by introducing Physical Functions (PFs), which are the fully featured PCIe functions that represent the physical hardware ports, and Virtual Functions (VFs), which are lightweight functions that are assigned to the virtual machines. To the VM, the VF resembles a regular NIC that communicates directly with the hardware. NICs support multiple VFs.
- Open vSwitch (OVS) is an open source software switch that is designed to be used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and also offers support for SDN protocols such as OpenFlow to create user-defined overlay networks (for example, VXLAN). OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using physical NICs. OVS now supports connection tracking (conntrack) with built-in firewall capability to avoid the overhead of Linux bridges that use iptables/ebtables. In Red Hat OpenStack Platform environments, OpenStack Networking (neutron) integrates with OVS by default.
- Data Plane Development Kit (DPDK) consists of a set of libraries and poll mode drivers (PMDs) for fast packet processing. It is designed to run mostly in user space, enabling applications to perform their own packet processing directly from or to the NIC. DPDK reduces latency and allows more packets to be processed. DPDK poll mode drivers (PMDs) run in a busy loop, constantly scanning the NIC ports on the host and the vNIC ports in the guest for the arrival of packets.
- DPDK-accelerated Open vSwitch (OVS-DPDK) is Open vSwitch bundled with DPDK for a high-performance, user-space solution with Linux kernel bypass and direct memory access (DMA) to physical NICs. The idea is to replace the standard OVS kernel data path with a DPDK-based data path, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The advantage of this architecture is that it is mostly transparent to users. The interfaces it exposes, such as OpenFlow, OVSDB, and the command line, remain mostly the same.
1.4. ETSI NFV architecture
The European Telecommunications Standards Institute (ETSI) is an independent standardization group that develops standards for information and communications technologies (ICT) in Europe.
Network functions virtualization (NFV) focuses on addressing problems involved in using proprietary hardware devices. With NFV, the necessity to install network-specific equipment is reduced, depending upon the use case requirements and economic benefits. The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) sets the requirements, reference architecture, and the infrastructure specifications necessary to ensure virtualized functions are supported.
Red Hat offers an open-source-based, cloud-optimized solution to help Communication Service Providers (CSPs) achieve IT and network convergence. Red Hat adds NFV features such as single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) to Red Hat OpenStack Services on OpenShift (RHOSO) environments.
1.5. NFV ETSI architecture and components
In general, a network functions virtualization (NFV) platform on Red Hat OpenStack Services on OpenShift (RHOSO) environments has the following components:
Figure 1.1. NFV ETSI architecture and components
- Virtualized Network Functions (VNFs) - the software implementation of routers, firewalls, load balancers, broadband gateways, mobile packet processors, servicing nodes, signalling, location services, and other network functions.
- NFV Infrastructure (NFVi) - the physical resources (compute, storage, network) and the virtualization layer that make up the infrastructure. The network includes the datapath for forwarding packets between virtual machines and across hosts. This allows you to install VNFs without being concerned about the details of the underlying hardware. NFVi forms the foundation of the NFV stack. NFVi supports multi-tenancy and is managed by the Virtual Infrastructure Manager (VIM). Enhanced Platform Awareness (EPA) improves the virtual machine packet forwarding performance (throughput, latency, jitter) by exposing low-level CPU and NIC acceleration components to the VNF.
- NFV Management and Orchestration (MANO) - the management and orchestration layer focuses on all the service management tasks required throughout the life cycle of the VNF. The main goal of MANO is to allow service definition, automation, error-correlation, monitoring, and life-cycle management of the network functions that the operator offers to its customers, decoupled from the physical infrastructure. This decoupling requires additional layers of management, provided by the Virtual Network Function Manager (VNFM). The VNFM manages the life cycle of the virtual machines and VNFs by either interacting directly with them or through the Element Management System (EMS) provided by the VNF vendor. The other important component defined by MANO is the Orchestrator, also known as the NFVO. The NFVO interfaces with various databases and systems, including Operations/Business Support Systems (OSS/BSS) on the top and the VNFM on the bottom. If the NFVO wants to create a new service for a customer, it asks the VNFM to trigger the instantiation of a VNF, which may result in multiple virtual machines.
- Operations and Business Support Systems (OSS/BSS) - provides the essential business function applications, for example, operations support and billing. The OSS/BSS needs to be adapted to NFV, integrating with both legacy systems and the new MANO components. The BSS systems set policies based on service subscriptions and manage reporting and billing.
- Systems Administration, Automation and Life-Cycle Management - manages system administration, automation of the infrastructure components and life cycle of the NFVi platform.
1.6. Red Hat NFV components
Red Hat’s solution for network functions virtualization (NFV) includes a range of products that can act as the different components of the NFV framework in the ETSI model. The following products from the Red Hat portfolio integrate into an NFV solution:
- Red Hat OpenStack Services on OpenShift (RHOSO) - Supports IT and NFV workloads. The Enhanced Platform Awareness (EPA) features deliver deterministic performance improvements through CPU pinning, huge pages, Non-Uniform Memory Access (NUMA) affinity, and network adaptors (NICs) that support SR-IOV and OVS-DPDK.
- Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host - Create virtual machines and containers as VNFs.
- Red Hat Ceph Storage - Provides the unified elastic and high-performance storage layer for all the needs of the service provider workloads.
- Red Hat JBoss Middleware and OpenShift Enterprise by Red Hat - Optionally provide the ability to modernize the OSS/BSS components.
- Red Hat CloudForms - Provides a VNF manager and presents data from multiple sources, such as the VIM and the NFVi in a unified display.
- Red Hat Satellite and Ansible by Red Hat - Optionally provide enhanced systems administration, automation and life-cycle management.
Chapter 2. NFV performance considerations
For a network functions virtualization (NFV) solution to be useful, its virtualized functions must meet or exceed the performance of physical implementations. Red Hat’s virtualization technologies are based on the high-performance Kernel-based Virtual Machine (KVM) hypervisor, common in OpenStack and cloud deployments.
In Red Hat OpenStack Services on OpenShift (RHOSO), you configure the Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest virtual network functions (VNFs). The key performance factors in the NFV use case are throughput, latency, and jitter.
You can enable high-performance packet switching between physical NICs and virtual machines using data plane development kit (DPDK) accelerated virtual machines. Open vSwitch (OVS) embeds support for Data Plane Development Kit (DPDK) and includes support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line-rate performance for guest VNFs.
Single root I/O virtualization (SR-IOV) networking provides enhanced performance, including improved throughput for specific networks and virtual machines.
Other important features for performance tuning include huge pages, NUMA alignment, host isolation, and CPU pinning. VNF flavors require huge pages and emulator thread isolation for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss.
2.1. CPUs and NUMA nodes
Previously, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).
In Non-Uniform Memory Access (NUMA), system memory is divided into zones called nodes, which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than memory connected to remote CPUs on that system. Normally, each socket on a NUMA system has a local memory node whose contents can be accessed faster than the memory in the node local to another CPU or the memory on a bus shared by all CPUs.
Similarly, physical NICs are placed in PCI slots on the Compute node hardware. These slots connect to specific CPU sockets that are associated to a particular NUMA node. For optimum performance, connect your datapath NICs to the same NUMA nodes in your CPU configuration (SR-IOV or OVS-DPDK).
The performance impact of NUMA misses is significant, generally starting at a 10% performance hit or higher. Each CPU socket can have multiple CPU cores that are treated as individual CPUs for virtualization purposes.
For more information about NUMA, see What is NUMA and how does it work on Linux?
2.1.1. NUMA node example
The following diagram provides an example of a two-node NUMA system and the way the CPU cores and memory pages are made available:
Figure 2.1. Example: two-node NUMA system
Remote memory available via Interconnect is accessed only if VM1 from NUMA node 0 has a CPU core in NUMA node 1. In this case, the memory of NUMA node 1 acts as local for the third CPU core of VM1 (for example, if VM1 is allocated with CPU 4 in the diagram above), but at the same time, it acts as remote memory for the other CPU cores of the same VM.
2.1.2. NUMA aware instances
You can configure an OpenStack environment to use NUMA topology awareness on systems with a NUMA architecture. When running a guest operating system in a virtual machine (VM) there are two NUMA topologies involved:
- NUMA topology of the physical hardware of the host
- NUMA topology of the virtual hardware exposed to the guest operating system
You can optimize the performance of guest operating systems by aligning the virtual hardware with the physical hardware NUMA topology.
2.2. CPU pinning
CPU pinning is the ability to run a specific virtual machine’s virtual CPU on a specific physical CPU, in a given host. vCPU pinning provides similar advantages to task pinning on bare-metal systems. Since virtual machines run as user space tasks on the host operating system, pinning increases cache efficiency.
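For example, in OpenStack Compute, CPU pinning is typically requested through flavor extra specs. The following is a minimal sketch; the flavor name nfv.pinned is an assumption, and the extra specs shown are the standard hw:cpu_policy and hw:cpu_thread_policy properties:
$ openstack flavor set nfv.pinned \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=require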
2.3. Huge pages
Physical memory is segmented into contiguous regions called pages. For efficiency, the system retrieves memory by accessing entire pages instead of individual bytes of memory. To perform this translation, the system looks in the Translation Lookaside Buffers (TLB) that contain the physical to virtual address mappings for the most recently or frequently used pages. When the system cannot find a mapping in the TLB, the processor must iterate through all of the page tables to determine the address mappings. Optimize the TLB to minimize the performance penalty that occurs during these TLB misses.
The typical page size in an x86 system is 4 KB, with other larger page sizes available. Larger page sizes mean that there are fewer pages overall, and therefore increase the amount of system memory that can have its virtual-to-physical address translation stored in the TLB. Consequently, this reduces TLB misses, which increases performance. With larger page sizes, there is an increased potential for memory to be under-utilized, because processes must allocate memory in pages but are not likely to require all of it. As a result, choosing a page size is a compromise between providing faster access times with larger pages and ensuring maximum memory utilization with smaller pages.
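For example, you can inspect the huge page pools on a Compute node and request huge pages for an instance through a flavor extra spec. A minimal sketch; the flavor name nfv.hugepages is an assumption:
$ grep Huge /proc/meminfo
$ openstack flavor set nfv.hugepages --property hw:mem_page_size=1GB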
Chapter 3. Requirements for NFV
This section describes the requirements for network functions virtualization (NFV) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Red Hat certifies hardware for use with RHOSO. For more information, see Certified hardware.
3.1. Tested NICs for NFV
For a list of tested NICs for NFV, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix.
Use the default driver for the supported NIC, unless you are configuring NVIDIA (Mellanox) network interfaces. For NVIDIA network interfaces, you must specify the kernel driver during configuration.
- Example
- In this example, an OVS-DPDK port is configured. Because the NIC is an NVIDIA ConnectX-5, you must specify the kernel driver, as shown in the sketch that follows:
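The original example manifest is not reproduced in this extract. The following is a hedged sketch of the kind of network configuration fragment involved; the bridge, port, and interface names are assumptions, and mlx5_core is the kernel driver used by ConnectX-5 NICs:
- type: ovs_user_bridge
  name: br-dpdk0
  use_dhcp: false
  members:
    - type: ovs_dpdk_port
      name: dpdk0
      driver: mlx5_core
      members:
        - type: interface
          name: nic5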
3.2. Discovering your NUMA node topology
For network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments, you must understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, perform one of the following tasks:
- Enable hardware introspection to retrieve this information from bare-metal nodes.
- Log on to each bare-metal node to manually collect the information.
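If you log on to a node to collect the information manually, commands such as the following are commonly used; replace <nic> with the name of a network interface:
$ lscpu | grep -i numa
$ numactl --hardware
$ cat /sys/class/net/<nic>/device/numa_node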
3.3. NFV BIOS settings
The following table describes the required BIOS settings for network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments:
You must enable SR-IOV global and NIC settings in the BIOS, or your RHOSO deployment with SR-IOV Compute nodes will fail.
Parameter | Setting |
---|---|
| Disabled. |
| Disabled. |
| Enabled. |
| Enabled. |
| Enabled. |
| Enabled. |
| Performance. |
| Enabled. |
| Disabled in NFV deployments that require deterministic performance. Enabled in all other scenarios. |
| Enabled for Intel cards if VFIO functionality is needed. |
| Disabled. |
On processors that use the intel_idle driver, Red Hat Enterprise Linux can ignore BIOS settings and re-enable the processor C-state. You can disable intel_idle and instead use the acpi_idle driver by specifying the key-value pair intel_idle.max_cstate=0 on the kernel boot command line.
Confirm that the processor is using the acpi_idle driver by checking the contents of current_driver:
$ cat /sys/devices/system/cpu/cpuidle/current_driver
Sample output:
acpi_idle
You will experience some latency after changing drivers, because it takes time for the Tuned daemon to start. However, after Tuned loads, the processor does not use the deeper C-state.
3.4. Supported drivers for NFV
For a complete list of supported drivers for network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.
For a list of NICs tested for NFV on RHOSO environments, see Tested NICs for NFV.
Chapter 4. Planning an SR-IOV deployment
To optimize single root I/O virtualization (SR-IOV) deployments for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is important to understand how SR-IOV uses the Compute node hardware (CPU, NUMA nodes, memory, NICs). This understanding will help you to determine the values required for the parameters used in your SR-IOV configuration.
To evaluate your hardware impact on the SR-IOV parameters, see Discovering your NUMA node topology.
4.1. NIC partitioning for an SR-IOV deployment
You can reduce the number of NICs that you need for each host by configuring single root I/O virtualization (SR-IOV) virtual functions (VFs) for Red Hat OpenStack Services on OpenShift (RHOSO) management networks and provider networks. When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. This feature has been validated on Intel Fortville NICs, and Mellanox CX-5 NICs.
To partition your NICs, you must adhere to the following requirements:
- The NICs, their applications, the VF guest, and OVS must reside on the same NUMA node on the Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations.
- Ensure that the NIC firmware is updated. yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
4.2. Hardware partitioning for an SR-IOV deployment
To achieve high performance with SR-IOV, partition the resources between the host and the guest.
Figure 4.1. NUMA node topology
A typical topology includes 14 cores per NUMA node on dual-socket Compute nodes. Both hyper-threading (HT) and non-HT cores are supported. Each core has two sibling threads. One core on each NUMA node is dedicated to the host. The virtual network function (VNF) handles the SR-IOV interface bonding. All the interrupt requests (IRQs) are routed on the host cores. The VNF cores are dedicated to the VNFs. They provide isolation from other VNFs and isolation from the host. Each VNF must use resources on a single NUMA node. The SR-IOV NICs used by the VNF must also be associated with that same NUMA node. This topology does not have a virtualization overhead. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease and consistency, and to avoid inconsistencies that break proper isolation and cause preemption and packet loss. The host and virtual machine isolation depend on a tuned profile, which defines the boot parameters and any Red Hat OpenStack Platform modifications based on the list of isolated CPUs.
4.3. Topology of an NFV SR-IOV deployment
The following image shows two VNFs, each with a management interface, represented by mgt, and data plane interfaces. The management interface manages ssh access and so on. The data plane interfaces bond the VNFs to DPDK to ensure high availability, because VNFs bond the data plane interfaces by using the DPDK library. The image also shows two provider networks for redundancy. The Compute node has two regular NICs that are bonded together and shared between the VNF management and the Red Hat OpenStack Platform API management.
Figure 4.2. NFV SR-IOV topology
The image shows a VNF that uses DPDK at an application level, and has access to SR-IOV virtual functions (VFs) and physical functions (PFs), for better availability or performance, depending on the fabric configuration. DPDK improves performance, while the VF/PF DPDK bonds provide support for failover, and high availability. The VNF vendor must ensure that the DPDK poll mode driver (PMD) supports the SR-IOV card that is being exposed as a VF/PF. The management network uses OVS, therefore the VNF sees a mgmt network device using the standard virtIO drivers. You can use that device to initially connect to the VNF, and ensure that the DPDK application bonds the two VF/PFs.
4.4. Topology for NFV SR-IOV without HCI
The following image shows the topology for SR-IOV without hyper-converged infrastructure (HCI) for NFV. It consists of Compute and Controller nodes with 1 Gbps NICs, and the RHOSO worker node.
Figure 4.3. NFV SR-IOV topology without HCI
Chapter 5. Planning an OVS-DPDK deployment
To optimize your Open vSwitch with Data Plane Development Kit (OVS-DPDK) deployment for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments, you should understand how OVS-DPDK uses the Compute node hardware (CPU, NUMA nodes, memory, NICs). This understanding will help you to determine the values required for the parameters used in your OVS-DPDK configuration.
When using OVS-DPDK and the OVS native firewall (a stateful firewall based on conntrack), you can track only packets that use ICMPv4, ICMPv6, TCP, and UDP protocols. OVS marks all other types of network traffic as invalid.
5.1. OVS-DPDK with CPU partitioning and NUMA topology
OVS-DPDK partitions the hardware resources for host, guests, and itself. The OVS-DPDK Poll Mode Drivers (PMDs) run DPDK active loops, which require dedicated CPU cores. Therefore, you must allocate some CPUs and huge pages to OVS-DPDK.
A sample partitioning includes 16 cores per NUMA node on dual-socket Compute nodes. The traffic requires additional NICs because you cannot share NICs between the host and OVS-DPDK.
Figure 5.1. NUMA topology: OVS-DPDK with CPU partitioning
You must reserve DPDK PMD threads on both NUMA nodes, even if a NUMA node does not have an associated DPDK NIC.
For optimum OVS-DPDK performance, reserve a block of memory local to the NUMA node. Choose NICs associated with the same NUMA node that you use for memory and CPU pinning. Ensure that both bonded interfaces are from NICs on the same NUMA node.
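After deployment, you can check how PMD threads and Rx queues are distributed across NUMA nodes by running the following command on a Compute node. This is a verification aid, not part of the required configuration:
$ ovs-appctl dpif-netdev/pmd-rxq-show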
5.2. OVS-DPDK with TCP segmentation offload
RHOSO 18.0.10 (Feature Release 3) promotes TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK from a technology preview to a generally available feature.
Enable TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this feature in a previously deployed system is not supported.
The segmentation process happens at the transport layer. It divides data from the upper stack layers into segments to support transport across and within networks at the network and data link layers.
Segmentation processing can happen on the host, where it consumes CPU resources. With TSO, segmentation is offloaded to NICs, to free up host resources and improve performance.
TSO for DPDK can be useful if your workload includes large frames that require TCP segmentation in the user space or kernel.
5.3. Enabling OVS-DPDK with TCP segmentation offload (Technology Preview)
You can configure your Red Hat OpenStack Services on OpenShift (RHOSO) OVS-DPDK environment to offload TCP segmentation to NICs (TSO).
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Enable the technology preview of TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this technology preview in a previously deployed system is not supported.
Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
Procedure
When you follow the instructions in Creating a set of data plane nodes with pre-provisioned nodes or Creating a set of data plane nodes with unprovisioned nodes, include the edpm_ovs_dpdk_enable_tso: true key-value pair in the OpenStackDataPlaneNodeSet manifest, as shown in the example fragment below.
- Complete the node set creation procedure.
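Example: the following is a hedged sketch of the relevant fragment of an OpenStackDataPlaneNodeSet manifest. The node set name and the surrounding structure are assumptions and must match your own node set definition:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
  namespace: openstack
spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_ovs_dpdk_enable_tso: true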
Verification
After completing the node set procedure, run the following command on the Compute nodes:
$ ovs-vsctl get Open_vSwitch . other_config:userspace-tso-enable
5.4. OVS-DPDK parameters
This section describes how OVS-DPDK uses data plane parameters in custom resources (CRs) to configure the CPU and memory for optimum performance. Use this information to evaluate the hardware support on your Compute nodes and to partition the hardware to optimize your OVS-DPDK deployment.
Always pair CPU sibling threads, or logical CPUs, together in the physical core when allocating CPU cores.
For details on how to determine the CPU and NUMA nodes on your Compute nodes, see Discovering your NUMA node topology. Use this information to map CPU and other parameters to support the host, guest instance, and OVS-DPDK process needs.
5.4.1. Data plane (EDPM) Ansible variables
The following variables are part of data plane (EDPM) Ansible roles:
- edpm_ovs_dpdk
- Enables you to add, modify, and delete OVS-DPDK configurations by using values defined in the OVS-DPDK edpm Ansible variables.
- edpm_ovs_dpdk_pmd_core_list
- Provides the CPU cores that are used for the DPDK poll mode drivers (PMDs). Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces.
- edpm_ovs_dpdk_enable_tso
- Enables (true) or disables (false) the TCP segmentation offload (TSO) for DPDK feature. The default is false.
  Warning: The content for this feature is available in this release as a Documentation Preview, and therefore is not fully verified by Red Hat. Use it only for testing, and do not use it in a production environment.
- edpm_tuned_profile
- The name of the custom TuneD profile. The default value is throughput-performance.
- edpm_tuned_isolated_cores
- A set of CPU cores isolated from the host processes.
- edpm_ovs_dpdk_socket_memory
- Specifies the amount of memory in MB to pre-allocate from the hugepage pool, per NUMA node. edpm_ovs_dpdk_socket_memory is the other_config:dpdk-socket-mem value in OVS:
  - Provide the value as a comma-separated list.
  - For a NUMA node without a DPDK NIC, use the static value of 1024 MB (1 GB).
  - Calculate the edpm_ovs_dpdk_socket_memory value from the MTU value of each NIC on the NUMA node. The following equation approximates the value:
    MEMORY_REQD_PER_MTU = (ROUNDUP_PER_MTU + 800) * (4096 * 64) bytes
    - 800 is the overhead value.
    - 4096 * 64 is the number of packets in the mempool.
  - Add the MEMORY_REQD_PER_MTU for each of the MTU values set on the NUMA node and add another 512 MB as a buffer. Round the value up to a multiple of 1024.
- edpm_ovs_dpdk_memory_channels
- Maps memory channels in the CPU per NUMA node. edpm_ovs_dpdk_memory_channels is the other_config:dpdk-extra="-n <value>" value in OVS:
  - Use dmidecode -t memory or your hardware manual to determine the number of memory channels available.
  - Use ls /sys/devices/system/node/node* -d to determine the number of NUMA nodes.
  - Divide the number of memory channels available by the number of NUMA nodes.
- edpm_ovs_dpdk_vhost_postcopy_support
- Enables or disables OVS-DPDK vhost post-copy support. Setting this value to true enables post-copy support for all vhost user client ports.
- edpm_nova_libvirt_qemu_group
- Set edpm_nova_libvirt_qemu_group to hugetlbfs so that the ovs-vswitchd and qemu processes can access the shared huge pages and the UNIX socket that configures the virtio-net device. This value is role-specific and should be applied to any role that uses OVS-DPDK.
- edpm_ovn_bridge_mappings
- A list of bridge and DPDK port mappings.
- edpm_kernel_args
- Provides multiple kernel arguments to /etc/default/grub for the Compute nodes at boot time.
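Bringing several of these variables together, the following is a hedged example of an ansibleVars fragment for an OVS-DPDK node set. All values are illustrative and must be derived from your own NUMA topology. The socket memory value follows the calculation described above: for a single DPDK NIC with a 2000-byte MTU, ROUNDUP_PER_MTU is 2048, (2048 + 800) * 4096 * 64 bytes is approximately 746 MB, adding the 512 MB buffer gives 1258 MB, which rounds up to 2048 MB.
edpm_kernel_args: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
edpm_tuned_profile: "cpu-partitioning"
edpm_tuned_isolated_cores: "2-19,22-39"
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
edpm_ovs_dpdk_socket_memory: "2048,1024"   # NUMA 0 has a DPDK NIC with MTU 2000; NUMA 1 has no DPDK NIC
edpm_ovs_dpdk_memory_channels: "4"
edpm_ovs_dpdk_vhost_postcopy_support: true
edpm_nova_libvirt_qemu_group: "hugetlbfs"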
5.4.2. Configuration map parameters
The following list describes parameters that you can use in ConfigMap sections:
- cpu_shared_set
- A list or range of host CPU cores used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share).
- cpu_dedicated_set
- A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. To determine the value:
  - Exclude all cores from the edpm_ovs_dpdk_pmd_core_list.
  - Include all remaining cores.
  - Pair the sibling threads together.
- reserved_host_memory_mb
- Reserves memory in MB for tasks on the host. Use the static value of 4096 MB.
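The following is a hedged sketch of how these parameters map to Compute service (nova) configuration supplied through a ConfigMap. The ConfigMap name, the file key, and the CPU ranges are assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-compute-extra-config
  namespace: openstack
data:
  25-nova-extra.conf: |
    [DEFAULT]
    reserved_host_memory_mb = 4096
    [compute]
    cpu_shared_set = 0,1,8,9
    cpu_dedicated_set = 4-7,12-15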
5.5. Two NUMA node example OVS-DPDK deployment
The Red Hat OpenStack Services on OpenShift (RHOSO) Compute node in the following example includes two NUMA nodes:
- NUMA 0 has logical cores 0-7 (four physical cores). The sibling thread pairs are (0,1), (2,3), (4,5), and (6,7)
- NUMA 1 has cores 8-15. The sibling thread pairs are (8,9), (10,11), (12,13), and (14,15).
- Each NUMA node connects to a physical NIC, namely NIC1 on NUMA 0, and NIC2 on NUMA 1.
Figure 5.2. OVS-DPDK: two NUMA nodes example
Reserve the first physical core, that is, both sibling threads, on each NUMA node (0,1 and 8,9) for non-datapath DPDK processes.
This example also assumes a 1500 MTU configuration, so the socket memory setting is the same for all use cases:
edpm_ovs_dpdk_socket_memory: "1024,1024"
- NIC 1 for DPDK, with one physical core for PMD
- In this use case, you allocate one physical core on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
cpu_dedicated_set: "4,5,6,7,12,13,14,15"
- NIC 1 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11"
cpu_dedicated_set: "6,7,12,13,14,15"
- NIC 2 for DPDK, with one physical core for PMD
- In this use case, you allocate one physical core on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
cpu_dedicated_set: "4,5,6,7,12,13,14,15"
- NIC 2 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11,12,13"
cpu_dedicated_set: "4,5,6,7,14,15"
- NIC 1 and NIC2 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on each NUMA node for PMD. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11,12,13"
cpu_dedicated_set: "6,7,14,15"
5.6. Topology of an NFV OVS-DPDK deployment
This example deployment shows an OVS-DPDK configuration and consists of two virtual network functions (VNFs) with two interfaces each:
- The management interface, represented by mgt.
- The data plane interface.
In the OVS-DPDK deployment, the VNFs operate with inbuilt DPDK that supports the physical interface. OVS-DPDK enables bonding at the vSwitch level. For improved performance in your OVS-DPDK deployment, separate kernel and OVS-DPDK NICs. To separate the management (mgt) network, which is connected to the Base provider network for the virtual machine, ensure that you have additional NICs. The Compute node consists of two regular NICs for the Red Hat OpenStack Platform API management that can be reused by the Ceph API but cannot be shared with any OpenStack project.
Figure 5.3. Compute node: NFV OVS-DPDK
Figure 5.4. OVS-DPDK Topology for NFV
Chapter 6. Installing and preparing the Operators
You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.
For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat knowledge base article at https://access.redhat.com/articles/7125383.
6.1. Prerequisites
- An operational RHOCP cluster, version 4.18. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  - For the minimum RHOCP hardware requirements for hosting your RHOSO control plane, see Minimum RHOCP hardware requirements.
  - For the minimum RHOCP network requirements, see RHOCP network requirements.
  - For a list of the Operators that must be installed before you install the openstack-operator, see RHOCP software requirements.
- The oc command line tool is installed on your workstation.
- You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
6.2. Installing the OpenStack Operator
You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource to start the OpenStack Operator on your cluster.
Procedure
- Log in to the RHOCP web console as a user with cluster-admin permissions.
- Select Operators → OperatorHub.
- In the Filter by keyword field, type OpenStack.
- Click the OpenStack Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
- On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
- On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
- Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
- Click Create OpenStack to open the Create OpenStack page.
- On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
Chapter 7. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
7.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field.
For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.
7.2. Creating a storage class
You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. If you do not have an existing storage class that can provide persistent volumes, you can use the Logical Volume Manager Storage Operator to provide a storage class for RHOSO. You specify this storage class as the cluster storage back end for the RHOSO control plane deployment. Use a storage back end based on SSD or NVMe drives for the storage class. For more information about Logical Volume Manager Storage, see Persistent storage using Logical Volume Manager Storage.
If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete through the annotation for the volume group to the worker node object. If you deploy pods before all the control plane nodes are ready, then multiple PVCs and pods are scheduled on the same nodes.
To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage":
$ oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'
The storage is ready when this command returns three non-zero values.
7.3. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Procedure
- Create the openstack project for the deployed RHOSO environment:
$ oc new-project openstack
- Ensure that the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators. If the security context constraint (SCC) is not "privileged", use the following commands to change it:
$ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
- Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:
$ oc project openstack
7.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.
For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.
You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.
Prerequisites
- You have installed python3-cryptography.
Procedure
- Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml.
- Add the initial configuration to openstack_service_secret.yaml. For an example, see Example Secret CR for secure access to the RHOSO service pods.
- Replace <base64_password> with a 32-character key that is base64 encoded.
Note: The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.
You can use the following command to manually generate a base64 encoded password:
$ echo -n <password> | base64
Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:
$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
- Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:
$(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
- Create the Secret CR in the cluster:
$ oc create -f openstack_service_secret.yaml -n openstack
- Verify that the Secret CR is created:
$ oc describe secret osp-secret -n openstack
7.4.1. Example Secret CR for secure access to the RHOSO service pods
You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using a Bash cat command that generates the required passwords and fernet key for you.
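The original command is not reproduced in this extract. The following is an abbreviated, hedged sketch of such a command. The password field names shown are examples only; a real Secret must contain the full set of password fields that your RHOSO release requires:
$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
data:
  # Example password fields only; add the remaining service passwords required by your release.
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  # Assumed field name for the fernet key; generated with the command described in the procedure above.
  KeystoneFernetKey: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
EOF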
Chapter 8. Preparing networks for RHOSO with NFV
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) on a network functions virtualization (NFV) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
8.1. Default Red Hat OpenStack Services on OpenShift networks
The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
- Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: used for internal communication between RHOSO components.
- Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
- Octavia controller network: used to connect Load-balancing service (octavia) controllers running in the control plane.
- Designate network: used internally by designate to manage the DNS servers.
- Designateext network: used to provide external access to the DNS service resolver and the DNS servers.
- Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.
Note: For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
---|---|---|---|---|---|
| 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
The following table specifies the networks that establish connectivity to the fabric using eth2 and eth3, with different IP addresses per zone and rack, and also a global bgpmainnet that is used as a source for the traffic:
Network name | Zone 0 | Zone 1 | Zone 2 |
---|---|---|---|
BGP Net1 ( | 100.64.0.0/24 | 100.64.1.0 | 100.64.2.0 |
BGP Net2 ( | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0 |
Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |
8.2. NIC configurations for NFV
The Red Hat OpenStack Services on OpenShift (RHOSO) nodes that host the data plane require one of the following NIC configurations:
- Single NIC configuration - One NIC for the provisioning network on the native VLAN and tagged VLANs that use subnets for the different data plane network types.
- Dual NIC configuration - One NIC for the provisioning network and the other NIC for the external network.
- Dual NIC configuration - One NIC for the provisioning network on the native VLAN, and the other NIC for tagged VLANs that use subnets for different data plane network types.
- Multiple NIC configuration - Each NIC uses a subnet for a different data plane network type.
8.3. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
8.3.1. Preparing RHOCP with isolated network interfaces
Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:

$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

- Discover the network configuration:

$ oc get nns/<worker_node> -o yaml | more

Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks.
In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:
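The nncp CR body is not reproduced in this extract. The following is a minimal sketch under stated assumptions: the worker interface is enp6s0, the internalapi network uses VLAN ID 20, and the host addresses are taken from the OCP worker nncp ranges in the network table. Extend the interfaces list with one VLAN interface per isolated network and adjust every value to your environment.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-enp6s0-worker-1
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
  desiredState:
    interfaces:
    - name: enp6s0                # ctlplane on the flat (non-VLAN) interface
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.122.10      # from the OCP worker nncp range for ctlplane
          prefix-length: 24
    - name: enp6s0.20             # internalapi VLAN; VLAN ID 20 is an assumption
      type: vlan
      state: up
      vlan:
        base-iface: enp6s0
        id: 20
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 172.17.0.10         # from the OCP worker nncp range for internalapi
          prefix-length: 24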
- Create the nncp CR in the cluster:

$ oc apply -f openstack-nncp.yaml

- Verify that the nncp CR is created:

$ oc get nncp -w
NAME                  STATUS        REASON
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Available     SuccessfullyConfigured
8.3.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks (a sketch of one of them follows this list):
  - internalapi, storage, ctlplane, and tenant networks of type macvlan.
  - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
  - designate, the network used internally by the DNS service (designate) to manage the DNS servers.
  - designateext, the network used to provide external access to the DNS service resolver and the DNS servers.
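For example, a net-attach-def for the internalapi network might look like the following sketch. The master interface (the VLAN interface defined in the nncp CR) is an assumption, and the whereabouts range matches the .30 - .70 range described in the list below.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "enp6s0.20",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.30",
        "range_end": "172.17.0.70"
      }
    }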
- metadata.namespace: The namespace where the services are deployed.
- "master": The node interface name associated with the network, as defined in the nncp CR.
- "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
- "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
- Create the NetworkAttachmentDefinition CR in the cluster:

$ oc apply -f openstack-net-attach-def.yaml

- Verify that the NetworkAttachmentDefinition CR is created:

$ oc get net-attach-def -n openstack
8.3.3. Preparing RHOCP for RHOSO network VIPs
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:
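A sketch of one such pool for the internalapi network, using the MetalLB range from the network table; repeat the pattern for each isolated network that exposes service VIPs.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.90
  autoAssign: false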
- spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
- Create the IPAddressPool CR in the cluster:

$ oc apply -f openstack-ipaddresspools.yaml

- Verify that the IPAddressPool CR is created:

$ oc describe -n metallb-system IPAddressPool
- Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
- In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.
In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:
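A sketch for the internalapi pool; the VLAN interface name is an assumption and must match the interface configured in the nncp CR.

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  ipAddressPools:
  - internalapi
  interfaces:
  - enp6s0.20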
- spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.
For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.
- Create the L2Advertisement CRs in the cluster:

$ oc apply -f openstack-l2advertisement.yaml

- Verify that the L2Advertisement CRs are created:

$ oc get l2advertisement -n metallb-system

If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.
- Check the network back end used by your cluster:

$ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

- If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
8.4. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure
- Create a file named openstack_netconfig.yaml on your workstation. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: openstacknetconfig
  namespace: openstack

- In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks. The following example creates isolated networks for the data plane:
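The full example is not reproduced in this extract. As a sketch under stated assumptions (the gateway address and the VLAN ID are illustrative), two of the networks could be defined as follows, with the remaining networks following the same pattern:

spec:
  networks:
  - name: ctlplane
    subnets:
    - name: subnet1
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1        # assumed gateway address
      allocationRanges:
      - start: 192.168.122.100
        end: 192.168.122.250
  - name: internalapi
    subnets:
    - name: subnet1
      cidr: 172.17.0.0/24
      vlan: 20                      # assumed VLAN ID
      allocationRanges:
      - start: 172.17.0.100
        end: 172.17.0.250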
- spec.networks.name: The name of the network, for example, CtlPlane.
- spec.networks.subnets: The IPv4 subnet specification.
- spec.networks.subnets.name: The name of the subnet, for example, subnet1.
- spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
- spec.networks.subnets.excludeAddresses: Optional: A list of IP addresses from the allocation range that must not be used by data plane nodes.
- spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
- Save the openstack_netconfig.yaml definition file.
- Create the data plane network:

$ oc create -f openstack_netconfig.yaml -n openstack

- To verify that the data plane network is created, view the openstacknetconfig resource:

$ oc get netconfig/openstacknetconfig -n openstack

If you see errors, check the underlying network-attachment-definition and node network configuration policies:

$ oc get network-attachment-definitions -n openstack
$ oc get nncp
Chapter 9. Creating the control plane for NFV environments
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. These control plane services are services that provide APIs and do not run Compute node workloads. The RHOSO control plane services run as a Red Hat OpenShift Container Platform (RHOCP) workload, and you deploy these services using Operators in OpenShift. When you configure these OpenStack control plane services, you use one custom resource (CR) definition called OpenStackControlPlane.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run RHOSO CLI commands.

$ oc rsh -n openstack openstackclient
9.1. Prerequisites
- The RHOCP cluster is prepared for RHOSO network isolation. For more information, see Preparing RHOCP for RHOSO networks.
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

$ oc get networkpolicy -n openstack

- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
9.2. Creating the control plane
Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:
- Create the control plane.
- Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with example configurations for each service. The procedure helps you create an operational control plane environment. You can use the environment to test and troubleshoot issues before additional required service customization. Services can be added and customized after the initial deployment.
To configure a service, you use the CustomServiceConfig field in a service specification to pass OpenStack configuration parameters in INI file format. For more information about the available configuration parameters, see Configuration reference.
For more information on how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For more information, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
For NFV environments, when you add the Networking service (neutron) and OVN service configurations, you must supply the following information:
- Physical networks where your gateways are located.
- Path to vhost sockets.
- VLAN ranges.
- Number of NUMA nodes.
- NICs that connect to the gateway networks.
If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver to the Networking service configuration.
Procedure
- Create the openstack project for the deployed RHOSO environment:

$ oc new-project openstack

- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators. If the security context constraint (SCC) is not "privileged", use the following commands to change it:

$ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

- Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
- Specify the Secret CR that you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:
- Specify the storageClass that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:
Note: For information about storage classes, see Creating a storage class.
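A minimal sketch of these two top-level fields; osp-secret is the conventional name used in RHOSO examples and is an assumption here, as is the storage class placeholder.

spec:
  secret: osp-secret                   # Secret CR that holds the service passwords
  storageClass: <rhocp_storage_class>  # storage class created for your RHOCP storage back end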
Add the following service configurations:
Block Storage service (cinder):
Important: This definition for the Block Storage service is only a sample. You might need to modify it for your NFV environment. For more information, see Planning storage and shared file systems in Planning your deployment.
Note: For the initial control plane deployment, the cinderBackup and cinderVolumes services are deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Block Storage service and the backup service.
- Compute service (nova):
Note: A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.
- DNS service for the data plane:
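The DNS service definition itself is not shown in this extract. As a hedged sketch only, assuming the dnsmasq options are set through the template options list (verify the layout with oc explain openstackcontrolplane.spec.dns), two forwarders would look like this:

dns:
  template:
    options:
    - key: server          # dnsmasq parameter; see the list of valid values below
      values:
      - 192.168.122.1      # assumed first upstream DNS server
    - key: server
      values:
      - 192.168.122.2      # assumed second upstream DNS server
    replicas: 2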
1. Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to.
2. Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:
- server
- rev-server
- srv-host
- txt-record
- ptr-record
- rebind-domain-ok
- naptr-record
- cname
- host-record
- caa-record
- dns-rr
- auth-zone
- synth-domain
- no-negcache
- local
3. Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.
- A Galera cluster for use by all RHOSO services (openstack), and a Galera cluster for use by the Compute service for cell1 (openstack-cell1):
- Identity service (keystone):
- Image service (glance):
Note: For the initial control plane deployment, the Image service is deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Image service.
- Key Management service (barbican):
- Memcached:

memcached:
  templates:
    memcached:
      replicas: 3

- Networking service (neutron):
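The full neutron definition is not reproduced here. The following hedged sketch shows only the template portion, using the standard neutron ML2/OVN option names; the MetalLB annotations and network attachments follow the same pattern as the other services, and every placeholder corresponds to an item in the list that follows.

neutron:
  apiOverride:
    route: {}
  template:
    networkAttachments:
    - internalapi
    customServiceConfig: |
      [ml2]
      mechanism_drivers = ovn,sriovnicswitch
      [ml2_type_vlan]
      network_vlan_ranges = <network_name1>:<VLAN-ID1>:<VLAN-ID2>,<network_name2>:<VLAN-ID1>:<VLAN-ID2>
      [ovn]
      vhost_sock_dir = <path>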
- If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver, for example, mechanism_drivers = ovn,sriovnicswitch.
- Replace <path> with the absolute path to the vhost sockets, for example, /var/lib/vhost.
- Replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:*name field.)
- Replace <VLAN-ID1> and <VLAN-ID2> with the VLAN IDs you are using.
- Object Storage service (swift):
- OVN:
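A hedged sketch of the OVN section; only the nicMappings part is shown, and the network and NIC names are the placeholders described in the list that follows.

ovn:
  template:
    ovnController:
      # Map each gateway physical network to the NIC that reaches it.
      nicMappings:
        <network_name>: <nic_name>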
- Replace <network_name> with the name of the physical network your gateway is on. (This network is set in the neutron network provider:*name field.)
- Replace <nic_name> with the name of the NIC connecting to the gateway network.
- Optional: Add additional <network_name>:<nic_name> pairs under nicMappings as required.
- Placement service (placement):
- RabbitMQ:
- Telemetry service (ceilometer, prometheus):
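A hedged sketch showing the point of the callout below: keep the autoscaling field present even when it is disabled. The surrounding field layout is an assumption; verify it with oc explain openstackcontrolplane.spec.telemetry.

telemetry:
  enabled: true
  template:
    ceilometer:
      enabled: true
    autoscaling:
      enabled: false   # the field must be present even if autoscaling is disabled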
1. You must have the autoscaling field present, even if autoscaling is disabled.
- Create the control plane:

$ oc create -f openstack_control_plane.yaml -n openstack

Note: Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run RHOSO CLI commands:

$ oc rsh -n openstack openstackclient

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Check the status of the control plane deployment:

$ oc get openstackcontrolplane -n openstack

Sample output:

NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resources are created when the status is "Setup complete".
Tip: Append the -w option to the end of the get command to track deployment progress.
- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service:

$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance

- Exit the OpenStackClient pod:

$ exit
9.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.
Note: If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.
- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.
Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
9.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service.
Procedure
- Open the OpenStackControlPlane CR file on your workstation. Locate the service you want to remove from the control plane and disable it:

cinder:
  enabled: false
  apiOverride:
    route: {}
  ...

- Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".
Tip: Append the -w option to the end of the get command to track deployment progress.
- Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

- Check that the service is removed:

$ oc get cinder -n openstack

This command returns the following message when the service is successfully removed:

No resources found in openstack namespace.

- Check that the API endpoints for the service are removed from the Identity service (keystone):

$ oc rsh -n openstack openstackclient
$ openstack endpoint list --service volumev3

This command returns the following message when the API endpoints for the service are successfully removed:

No service with a type, name or ID of 'volumev3' exists.
9.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- About advertising for the IP address pools
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)
Chapter 10. Creating the data plane for SR-IOV and DPDK environments
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. After you have defined your OpenStackDataPlaneNodeSet CRs, you create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.
An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information on how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
10.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane for NFV environments.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
10.2. Creating the data plane secrets
The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality:
- To enable secure access between nodes:
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes.
- To register the operating system of the nodes that are not registered to the Red Hat Customer Portal.
- To enable repositories for the nodes.
- To provide Compute nodes with access to libvirt.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
- For unprovisioned nodes, create the SSH key pair for Ansible:

$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096

Replace <key_file_name> with the name to use for the key pair.
- Create the Secret CR for Ansible and apply it to the cluster:
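The command itself is not included in this extract. A hedged sketch follows, using the secret name verified later in this procedure and key-name conventions assumed from other RHOSO examples; the authorized_keys entry is the optional bare-metal case described below.

$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --from-file=ssh-privatekey=<key_file_name> \
  --from-file=ssh-publickey=<key_file_name>.pub \
  --from-file=authorized_keys=<key_file_name>.pub \
  -n openstack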
- Replace <key_file_name> with the name and location of your SSH key pair file.
- Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
- If you are creating Compute nodes, create a secret for migration.
- Create the SSH key pair for instance migration:

$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''

- Create the Secret CR for migration and apply it to the cluster:
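Again as a hedged sketch with assumed key names and the secret name verified later in this procedure:

$ oc create secret generic nova-migration-ssh-key \
  --from-file=ssh-privatekey=nova-migration-ssh-key \
  --from-file=ssh-publickey=nova-migration-ssh-key.pub \
  -n openstack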
- For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

$ oc create secret generic subscription-manager \
  --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
- Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
If you are creating Compute nodes, create a secret for libvirt.
- Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:
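A sketch of secret_libvirt.yaml; the LibvirtPassword data key is an assumption, so confirm the expected key name for your release.

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>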
Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64

Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
- Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack

- Verify that the Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
10.3. Creating a custom SR-IOV Compute service
You must create a custom SR-IOV Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service performs the following tasks on the SR-IOV Compute nodes:
- Applies CPU pinning parameters.
- Performs PCI passthrough.
To create the SR-IOV custom service, you must perform these actions:
- Create a ConfigMap for CPU pinning that maps a CPU pinning configuration to a specified set of SR-IOV Compute nodes.
- Create a ConfigMap for PCI passthrough that maps a PCI passthrough configuration to a specified set of SR-IOV Compute nodes.
- Create the SR-IOV custom service that implements the ConfigMaps on your data plane.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Create a ConfigMap CR that defines configurations for CPU pinning and PCI passthrough, and save it to a YAML file on your workstation, for example, sriov-pinning-passthru.yaml. Change the values as appropriate for your environment:
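The ConfigMap content is not reproduced in this extract. The following hedged sketch uses standard nova.conf option names; the ConfigMap name and the data file names are assumptions, and every placeholder is explained in the list that follows.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cpu-pinning-nova        # assumed name, referenced later from the custom service
  namespace: openstack
data:
  25-cpu-pinning-nova.conf: |
    [compute]
    cpu_shared_set = 0-3,24-27
    cpu_dedicated_set = 8-23,32-47
  26-numa-affinity-nova.conf: |
    [neutron]
    physnets = <network_name1>,<network_name2>
    [neutron_physnet_<network_name1>]
    numa_nodes = <ID list>
    [neutron_physnet_<network_name2>]
    numa_nodes = <ID list>
  27-sriov-nova.conf: |
    [pci]
    passthrough_whitelist = {"address": "0000:19:00.1", "physical_network": "<network_name1>"}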
- cpu_shared_set: Enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy.
- cpu_dedicated_set: Enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, 4-12,^8,15 reserves cores 4-12 and 15, excluding 8.
- <network_name_n>: Replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:*name field.)
- <ID list>: Replace <ID list> with a comma-separated list of IDs of the NUMA nodes associated with this physnet, for example, 0,1. This configuration ensures that instances using one or more L2-type networks with provider:physical_network=foo must be scheduled on host cores from NUMA node 0, while instances using one or more networks with provider:physical_network=bar must be scheduled on host cores from both NUMA nodes 2 and 3. For the latter case, it is necessary to split the guest across two or more host NUMA nodes by using the hw:numa_nodes extra spec.
- passthrough_whitelist: Specify valid NIC addresses and names for "address" and "physical_network".
- Create the ConfigMap object, using the ConfigMap CR file:
Example:

$ oc create -f sriov-pinning-passthru.yaml -n openstack
- Create an OpenStackDataPlaneService CR that defines the SR-IOV custom service, and save it to a YAML file on your workstation, for example, nova-custom-sriov.yaml:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-sriov

- Add the ConfigMap CRs to the custom service, and specify the Secret CR for the cell that the node set that runs this service connects to.
- Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field.
playbook: identifies the default playbook available for your service. In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html.
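For orientation, the assembled service might look like the following sketch. The dataSources layout, the nova-cell1-compute-config secret name, and the osp.edpm.nova playbook name are assumptions drawn from the default service definitions; verify them against the output of oc get openstackdataplaneservice nova -o yaml.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-sriov
spec:
  dataSources:
  - configMapRef:
      name: cpu-pinning-nova          # the ConfigMap(s) created earlier in this procedure
  - secretRef:
      name: nova-cell1-compute-config # cell secret for the cell this node set connects to
  playbook: osp.edpm.nova
  edpmServiceType: nova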
- Create the nova-custom-sriov service:

$ oc apply -f nova-custom-sriov.yaml -n openstack

- Verify that the custom service is created:

$ oc get openstackdataplaneservice nova-custom-sriov -o yaml -n openstack
10.4. Creating a custom OVS-DPDK Compute service
You must create a custom OVS-DPDK Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service applies various parameters on the OVS-DPDK Compute nodes, including CPU pinning parameters, a block migration parameter, and a NUMA-aware vswitch feature that allows instances to spawn in the NUMA node that is connected to the NIC used by the OVS bridge.
To create the OVS-DPDK custom service, you must perform these actions:
- Create a ConfigMap that maps the configurations to a specified set of OVS-DPDK Compute nodes.
- Create the OVS-DPDK custom service that implements the ConfigMap on your data plane.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
- Create a ConfigMap CR that defines a configuration for the parameters, and save it to a YAML file on your workstation, for example, dpdk-pinning.yaml. Change the values as appropriate for your environment:
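As with the SR-IOV service, the ConfigMap content is not reproduced here. The following is a hedged sketch with standard nova.conf option names and an assumed ConfigMap name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-dpdk-config          # assumed name, referenced from the custom service
  namespace: openstack
data:
  25-nova-dpdk.conf: |
    [compute]
    cpu_shared_set = 0-3,24-27
    cpu_dedicated_set = 8-23,32-47
    [neutron]
    physnets = <network_name1>,<network_name2>
    [neutron_physnet_<network_name1>]
    numa_nodes = <ID list>
    [neutron_physnet_<network_name2>]
    numa_nodes = <ID list>
    [libvirt]
    live_migration_permit_post_copy = false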
- cpu_shared_set: Enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy.
- cpu_dedicated_set: Enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, 4-12,^8,15 reserves cores 4-12 and 15, excluding 8.
- <network_name_n>: Replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on, for which you need to configure NUMA affinity. (This network is set in the neutron network provider:*name field.)
- <ID list>: Replace <ID list> with a comma-separated list of IDs of the NUMA nodes associated with this physnet, for example, 0,1. This configuration ensures that instances using one or more L2-type networks with provider:physical_network=foo must be scheduled on host cores from NUMA node 0, while instances using one or more networks with provider:physical_network=bar must be scheduled on host cores from both NUMA nodes 2 and 3. For the latter case, it is necessary to split the guest across two or more host NUMA nodes by using the hw:numa_nodes extra spec.
- live_migration_permit_post_copy=false: Necessary for successful block live migration of instances attached to a Geneve network with DPDK.
- Create the ConfigMap object, using the ConfigMap CR file:
Example:

$ oc create -f dpdk-pinning.yaml -n openstack
- Create an OpenStackDataPlaneService CR that defines the OVS-DPDK custom service, and save it to a YAML file on your workstation, for example, nova-custom-ovsdpdk.yaml:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ovsdpdk

- Add the ConfigMap CR to the custom service, and specify the Secret CR for the cell that the node set that runs this service connects to.
- Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field.
playbook: identifies the default playbook available for your service. In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html.
- Create the nova-custom-ovsdpdk service:

$ oc apply -f nova-custom-ovsdpdk.yaml -n openstack

- Verify that the custom service is created:

$ oc get openstackdataplaneservice nova-custom-ovsdpdk -o yaml -n openstack
10.5. Creating a set of data plane nodes with pre-provisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
Procedure
- Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:
1. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
Specify that the nodes in this set are pre-provisioned:
preProvisioned: true

- Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to write to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- Enable persistent logging for the data plane nodes:
Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
-
Replace
-
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.
- Register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following steps demonstrate how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
- Create a Secret CR that contains the subscription-manager credentials.
- Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Red Hat Knowledgebase article Creating Registry Service Accounts.
- Specify the Secret CRs to use to source the usernames and passwords:
- Define each node in this node set:
1. The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
2. Defines the IPAM and the DNS records for the node.
3. Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
4. Node-specific Ansible variables that customize the node.
Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
- Sets the os-net-config provider to nmstate. The default value is false. Change it to true unless a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
For more information, see:
- Save the openstack_preprovisioned_node_set.yaml definition file.
- Create the data plane resources:

$ oc create -f openstack_preprovisioned_node_set.yaml -n openstack

- Verify that the data plane resources have been created:

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   False    Deployment not started

For information about the meaning of the returned status, see Data plane conditions and states.
- Verify that the Secret resource was created for the node set:

$ oc get secret | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

- Verify the services were created:

$ oc get openstackdataplaneservice -n openstack
10.5.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
10.6. Creating a set of data plane nodes with unprovisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
Prerequisites
- Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- A BareMetalHost CR is registered and inspected for each bare-metal data plane node. Each bare-metal node must be in the Available state after inspection. For more information about configuring bare-metal nodes, see Bare metal configuration in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide.
Procedure
- Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:
1. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes:
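A hedged sketch of the field, using the placeholders described in the list that follows; the field names are assumptions based on the bare-metal set template, so verify them with oc explain openstackdataplanenodeset.spec.baremetalSetTemplate.

baremetalSetTemplate:
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>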
Replace
<bmh_namespace>
with the namespace defined in the correspondingBareMetalHost
CR for the node. -
Replace
<ansible_ssh_user>
with the username of the Ansible SSH user. -
Replace
<bmh_label>
with the metadata label defined in the correspondingBareMetalHost
CR for the node, for example,openstack
. Metadata labels, such asapp
,workload
, andnodeName
are key-value pairs for labelling nodes. Set thebmhLabelSelector
field to select data plane nodes based on one or more labels that match the labels in the correspondingBareMetalHost
CR. -
Replace
<interface>
with the control plane interface the node connects to, for example,enp6s0
.
-
Replace
The BMO manages BareMetalHost CRs in the openshift-machine-api namespace by default. You must update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- Enable persistent logging for the data plane nodes:
- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
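A sketch of how persistent logging might be enabled through an extra mount in the nodeTemplate section, assuming a PVC named <pvc_name> and the default ansible-runner artifact path; treat the field layout as illustrative rather than definitive.

nodeTemplate:
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
      mounts:
        - name: ansible-logs
          mountPath: /runner/artifacts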
Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For more information, see Section 10.7.1, “nodeTemplate”.
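A short sketch of common nodeTemplate configuration; the Ansible variables shown are illustrative examples, not a required set.

nodeTemplate:
  ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
  ansible:
    ansibleUser: cloud-admin
    ansibleVars:
      # Illustrative variables only; configure the edpm_* variables that apply to your services.
      timesync_ntp_servers:
        - hostname: pool.ntp.org
      edpm_network_config_nmstate: true   # assumed name of the os-net-config provider toggle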
Define each node in this node set:
1. The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
2. Defines the IPAM and the DNS records for the node.
3. Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
4. Node-specific Ansible variables that customize the node.
Note
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables that you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
- Sets the os-net-config provider to nmstate. The default value is false. Change it to true unless a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information about the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
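A sketch of a single node entry that exercises the callouts above, assuming a node named edpm-compute-0 and an address inside the ctlplane allocation range of your NetConfig CR; the names and addresses are illustrative.

nodes:
  edpm-compute-0:                 # 1: node definition reference
    hostName: edpm-compute-0
    networks:                     # 2: IPAM and DNS records for the node
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100  # 3: predictable IP in the NetConfig allocation range
      - name: internalapi
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    ansible:
      ansibleVars:                # 4: node-specific Ansible variables
        fqdn_internal_api: edpm-compute-0.example.com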
For information about the properties you can use to configure node attributes, see Section 10.7, “OpenStackDataPlaneNodeSet CR spec properties”.
- Save the openstack_unprovisioned_node_set.yaml definition file.
- Create the data plane resources:

$ oc create -f openstack_unprovisioned_node_set.yaml -n openstack

- Verify that the data plane resources have been created:

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   False    Deployment not started

For information on the meaning of the returned status, see Data plane conditions and states.
- Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

- Verify the services were created:
10.6.1. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet
CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lowercase alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
10.7. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet
CR spec
properties you can configure.
10.7.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet
. You can override these common attributes in the definition for each individual node.
Field | Description |
---|---|
ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret |
managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane |
networks | Network definitions for the OpenStackDataPlaneNodeSet. |
ansible | Ansible configuration options. For more information, see Section 10.7.3, “ansible”. |
extraMounts | The files to mount into an Ansible Execution Pod. |
userData | UserData configuration for the OpenStackDataPlaneNodeSet. |
networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet. |
10.7.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet
. Overrides the common attributes defined in the nodeTemplate
.
Field | Description |
---|---|
ansible | Ansible configuration options. For more information, see Section 10.7.3, “ansible”. |
extraMounts | The files to mount into an Ansible Execution Pod. |
hostName | The node name. |
managementNetwork | Name of the network to use for management (SSH/Ansible). |
networkData | NetworkData configuration for the node. |
networks | Instance networks. |
userData | Node-specific user data. |
10.7.3. ansible
Defines the group of Ansible configuration options.
Field | Description |
---|---|
ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin |
ansibleHost | SSH host for the Ansible connection. |
ansiblePort | SSH port for the Ansible connection. |
ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each data plane service. |
ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars with a duplicate key take precedence. For more information, see Section 10.7.4, “ansibleVarsFrom”. |
10.7.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
Field | Description |
---|---|
prefix | An optional identifier to prepend to each key in the ConfigMap or Secret. |
configMapRef | The ConfigMap CR to select the Ansible variables from. |
secretRef | The Secret CR to select the Ansible variables from. |
10.8. Network interface configuration options
Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.
Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic.
10.8.1. interface
Defines a single network interface. The network interface name
uses either the actual interface name (eth0
, eth1
, enp0s25
) or a set of numbered interfaces (nic1
, nic2
, nic3
). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1
and nic2
, instead of named interfaces such as eth0
and eno2
. For example, one host might have interfaces em1
and em2
, while another has eno1
and eno2
, but you can refer to the NICs of both hosts as nic1
and nic2
.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth0, eth1, and so on. Names appear in this format when consistent device naming is turned off in udev.
- enoX and emX interfaces, such as eno0, eno1, em0, em1, and so on. These are usually on-board interfaces.
- enX and any other interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, for example, if the interfaces have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1
to nic4
and attach only four cables on each host.
Option | Default | Description |
---|---|---|
name | | Name of the interface. |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | | A list of IP addresses assigned to the interface. |
routes | | A list of routes assigned to the interface. For more information, see Section 10.8.7, “routes”. |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the interface as the primary interface. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the interface. |
|
Set this option to |
- Example
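The original example block is not reproduced above; the following is a representative interface entry from an os-net-config network_config list, with illustrative addresses.

- type: interface
  name: nic2
  use_dhcp: false
  addresses:
    - ip_netmask: 192.0.2.10/24
  routes:
    - ip_netmask: 0.0.0.0/0
      next_hop: 192.0.2.1
  mtu: 1500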
10.8.2. vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters
section.
vlan
options
Option | Default | Description |
---|---|---|
vlan_id | The VLAN ID. | |
device | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the VLAN. | |
routes | A list of routes assigned to the VLAN. For more information, see Section 10.8.7, “routes”. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the VLAN as the primary interface. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the VLAN. |
- Example
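A representative vlan entry from an os-net-config network_config list; the VLAN ID and address are illustrative, and the original example block is not reproduced above.

- type: vlan
  device: nic2
  vlan_id: 20
  addresses:
    - ip_netmask: 172.17.0.10/24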
10.8.3. ovs_bridge
Defines a bridge in Open vSwitch (OVS), which connects multiple interface
, ovs_bond
, and vlan
objects together.
The network interface type, ovs_bridge
, takes a parameter name
.
Placing Control group networks on the ovs_bridge
interface can cause down time. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:
- You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
- To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
If you have multiple bridges, you must use distinct bridge names other than accepting the default name of bridge_name
. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.
ovs_bridge
options
Option | Default | Description |
---|---|---|
name | Name of the bridge. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bridge. | |
routes | A list of routes assigned to the bridge. For more information, see Section 10.8.7, “routes”. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
members | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. | |
ovs_options | A set of options to pass to OVS when creating the bridge. | |
ovs_extra | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bridge. |
- Example
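A representative ovs_bridge definition with an interface and a VLAN as members; names, VLAN IDs, and addresses are illustrative, and the original example block is not reproduced above.

- type: ovs_bridge
  name: br-tenant
  use_dhcp: false
  mtu: 9000
  members:
    - type: interface
      name: nic5
      mtu: 9000
      primary: true
    - type: vlan
      vlan_id: 30
      mtu: 9000
      addresses:
        - ip_netmask: 172.18.0.10/24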
10.8.4. Network interface bonding
You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.
Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.
Bond type | Type value | Allowed bridge types | Allowed members |
---|---|---|---|
OVS kernel bonds | ovs_bond | ovs_bridge | interface |
OVS-DPDK bonds | ovs_dpdk_bond | ovs_user_bridge | ovs_dpdk_port |
Linux kernel bonds | linux_bond | ovs_bridge | interface |
Do not combine ovs_bridge
and ovs_user_bridge
on the same node.
ovs_bond
-
Defines a bond in Open vSwitch (OVS) to join two or more
interfaces
together. This helps with redundancy and increases bandwidth.
Option | Default | Description |
---|---|---|
name | Name of the bond. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bond. | |
routes | A list of routes assigned to the bond. For more information, see Section 10.8.7, “routes”. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
primary | False | Defines the interface as the primary interface. |
members | A sequence of interface objects that you want to use in the bond. | |
ovs_options | A set of options to pass to OVS when creating the bond. For more information, see the ovs_options table that follows. | |
ovs_extra | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bond. |
ovs_option | Description |
---|---|
|
Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the |
|
When you configure a bond using |
|
Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use |
| Set active-backup as the bond mode if LACP fails. |
| Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow. |
| Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. |
| If using miimon, set the heartbeat interval (milliseconds). |
| Set the interval (milliseconds) that a link must be up to be activated to prevent flapping. |
| Set the interval (milliseconds) that flows are rebalancing between bond members. Set this value to zero to disable flow rebalancing between bond members. |
- Example - OVS bond
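A representative ovs_bond inside an OVS bridge, using illustrative NIC numbers and the source-load-balancing mode described in the ovs_options table.

- type: ovs_bridge
  name: br-bond
  members:
    - type: ovs_bond
      name: bond1
      mtu: 9000
      ovs_options: "bond_mode=balance-slb"
      members:
        - type: interface
          name: nic5
          mtu: 9000
          primary: true
        - type: interface
          name: nic6
          mtu: 9000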
- Example - OVS DPDK bond
- In this example, a bond is created as part of an OVS user space bridge:
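A representative ovs_user_bridge that contains an ovs_dpdk_bond with two ovs_dpdk_port members; the NIC numbers and queue count are illustrative, and the original example block is not reproduced above.

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      rx_queue: 2
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: nic7
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: interface
              name: nic8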
10.8.5. LACP with OVS bonding modes
You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.
Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.
Do not use OVS bonds on control and storage networks. Instead, use Linux bonds with VLAN and LACP.
If you use OVS bonds, and restart the OVS or the neutron agent for updates, hot fixes, and other events, the control plane can be disrupted.
Objective | OVS bond mode | Compatible LACP options | Notes |
High availability (active-passive) |
|
| |
Increased throughput (active-active) |
|
|
|
|
|
|
10.8.6. linux_bond
Defines a Linux bond that joins two or more interfaces
together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options
parameter.
Option | Default | Description |
---|---|---|
name | Name of the bond. | |
use_dhcp | False | Use DHCP to get an IP address. |
use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
addresses | A list of IP addresses assigned to the bond. | |
routes | A list of routes assigned to the bond. See Section 10.8.7, “routes”. | |
mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
members | A sequence of interface objects that you want to use in the bond. | |
bonding_options | A set of options when creating the bond. See the bonding_options parameters for Linux bonds table that follows. | |
defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
persist_mapping | False | Write the device alias configuration instead of the system names. |
dhclient_args | None | Arguments that you want to pass to the DHCP client. |
dns_servers | None | List of DNS servers that you want to use for the bond. |
bonding_options parameters for Linux bonds
The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:
bonding_options | Description |
---|---|
mode | Sets the bonding mode, which in the example is 802.3ad, or LACP mode. |
lacp_rate | Defines whether LACP packets are sent every 1 second, or every 30 seconds. |
updelay | Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages. |
miimon | The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver. |
- Example - Linux bond
- Example - Linux bond: bonding two interfaces
- Example - Linux bond set to active-backup mode with one VLAN
- Example - Linux bond on OVS bridge
- In this example, the bond is set to 802.3ad with LACP mode and one VLAN:
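A representative sketch for this case: a Linux bond in 802.3ad (LACP) mode placed on an OVS bridge with one VLAN; the interface names, VLAN ID, and address are illustrative.

- type: ovs_bridge
  name: br-tenant
  use_dhcp: false
  members:
    - type: linux_bond
      name: bond0
      bonding_options: "mode=802.3ad lacp_rate=1 updelay=1000 miimon=100"
      members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4
    - type: vlan
      vlan_id: 20
      addresses:
        - ip_netmask: 172.17.0.10/24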
10.8.7. routes
Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.
Option | Default | Description |
---|---|---|
ip_netmask | None | IP and netmask of the destination network. |
default | False | Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0. |
next_hop | None | The IP address of the router used to reach the destination network. |
- Example - routes
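A representative routes list on an interface, using documentation-range addresses; the original example block is not reproduced above.

- type: interface
  name: nic2
  addresses:
    - ip_netmask: 192.0.2.10/24
  routes:
    - ip_netmask: 198.51.100.0/24
      next_hop: 192.0.2.1
    - default: true
      next_hop: 192.0.2.254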
10.9. Example custom network interfaces for NFV
The following examples illustrate how you can use a template to customize network interfaces for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
10.9.1. Example template - non-partitioned NIC
This template example configures the RHOSO networks on a NIC that is not partitioned.
1, 2. edpm-compute-n: defines the edpm_network_config_os_net_config_mappings variable to map the actual NICs. You identify each NIC by mapping the MAC address or the device name on each Compute node to the NIC ID that the RHOSO os-net-config tool uses, which is typically nic1, nic2, and so on.
3. linux_bond: creates a control-plane Linux bond for an isolated network. In this example, a Linux bond is created with mode active-backup on nic3 and nic4.
4, 5. type: vlan: assigns VLANs to Linux bonds. In this example, the VLAN IDs of the internalapi and storage networks are assigned to bond-api.
6. ovs_user_bridge: sets a bridge with OVS-DPDK ports. In this example, an OVS user bridge is created with a DPDK bond that has two DPDK ports that correspond to nic7 and nic8 for the tenant network. A GENEVE tunnel is used.
7, 9, 11. sriov_pf: creates SR-IOV VFs. In this example, an interface type of sriov_pf is configured as a physical function that the host can use.
8, 10, 12. numvfs: set only the number of VFs that are required.
10.9.2. Example template - partitioned NIC
This template example configures the RHOSO networks on a NIC that is partitioned. This example only shows the portion of the custom resource (CR) definition where the NIC is partitioned.
10.10. Deploying the data plane
You use the OpenStackDataPlaneDeployment
CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment
custom resources (CRs). Each OpenStackDataPlaneDeployment
CR models a single Ansible execution.
When the OpenStackDataPlaneDeployment
successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment
or related OpenStackDataPlaneNodeSet
resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment
CR.
Create an OpenStackDataPlaneDeployment
(CR) that deploys each of your OpenStackDataPlaneNodeSet
CRs.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane

1. The OpenStackDataPlaneDeployment CR name must be unique, must consist of lowercase alphanumeric characters, - (hyphens) or . (periods), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
In the list of services, replace nova with nova-custom-sriov, nova-custom-ovsdpdk, or both.
Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy. A combined sketch of these two steps follows the next bullet.
- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
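A combined sketch of the previous two steps, assuming one node set and the SR-IOV custom Nova service; the servicesOverride list is an illustrative subset, and you can instead keep the edited service list on the node set if that is where you defined it.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane
spec:
  nodeSets:
    - <nodeSet_name>
  servicesOverride:
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - ovn
    - neutron-sriov
    - libvirt
    - nova-custom-sriov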
- Save the openstack_data_plane_deploy.yaml deployment file.
- Deploy the data plane:

$ oc create -f openstack_data_plane_deploy.yaml -n openstack

- You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

- Confirm that the data plane is deployed:

$ oc get openstackdataplanedeployment -n openstack

Sample output:

NAME                   STATUS   MESSAGE
openstack-data-plane   True     Setup Complete

- Repeat the oc get command until you see the NodeSet Ready message:

$ oc get openstackdataplanenodeset -n openstack

Sample output:

NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready

For information about the meaning of the returned status, see Data plane conditions and states.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.
Map the Compute nodes to the Compute cell that they are connected to:

$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to cell1.
Verification
Access the remote shell for the openstackclient pod and confirm that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
10.11. Data plane conditions and states
Each data plane resource has a series of conditions within their status
subresource that indicates the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet
, until an OpenStackDataPlaneDeployment
has been started and finished successfully, the Ready
condition is False
. When the deployment succeeds, the Ready
condition is set to True
. A subsequent deployment sets the Ready
condition to False
until the deployment succeeds, when the Ready
condition is set to True
.
Condition | Description |
---|---|
|
|
| "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
| "True": The NodeSet has been successfully deployed. |
| "True": The required inputs are available and ready. |
| "True": DNSData resources are ready. |
| "True": The IPSet resources are ready. |
| "True": Bare-metal nodes are provisioned and ready. |
Status field | Description |
---|---|
|
|
| |
|
Condition | Description |
---|---|
|
|
| "True": The data plane is successfully deployed. |
| "True": The required inputs are available and ready. |
|
"True": The deployment has succeeded for the named |
|
"True": The deployment has succeeded for the named |
Status field | Description |
---|---|
|
|
Condition | Description |
---|---|
| "True": The service has been created and is ready for use. "False": The service has failed to be created. |
10.12. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
10.12.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
- Determine the name and status of all deployments:

$ oc get openstackdataplanedeployment

The following example output shows two deployments currently in progress:

NAME           NODESETS                  STATUS   MESSAGE
edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress

- Retrieve and inspect Ansible execution jobs. The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list the jobs for each OpenStackDataPlaneDeployment by using the label.
- You can check logs by using oc logs -f job/<job-name>. For example, if you want to check the logs from the configure-network job:

$ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
PLAY RECAP *********************************************************************
edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
10.12.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE
field of the oc get job <job_name>
command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>
. For example, to view the logs for the repo-setup-openstack-edpm
service, use the command oc logs job/repo-setup-openstack-edpm
.
10.12.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
- Retrieve pods with the OpenStackAnsibleEE label:

$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s

- SSH into the pod you want to check:
  - Pod that is running:

    $ oc rsh validate-network-edpm-compute-6g7n9

  - Pod that is not running:

    $ oc debug configure-network-edpm-compute-j6r4l

- List the directories in the /runner/artifacts mount:

$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute

- View the stdout for the required artifact:

$ cat /runner/artifacts/configure-network-edpm-compute/stdout
Chapter 11. Accessing the RHOSO cloud
You can access your Red Hat OpenStack Services on OpenShift (RHOSO) cloud to perform actions on your data plane by either accessing the OpenStackClient
pod through a remote shell from your workstation, or by using a browser to access the Dashboard service (horizon) interface.
11.1. Accessing the OpenStackClient pod
You can execute Red Hat OpenStack Services on OpenShift (RHOSO) commands on the deployed data plane by using the OpenStackClient
pod through a remote shell from your workstation. The OpenStack Operator created the OpenStackClient
pod as a part of the OpenStackControlPlane
resource. The OpenStackClient
pod contains the client tools and authentication details that you require to perform actions on your data plane.
Prerequisites
-
You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with
cluster-admin
privileges.
Procedure
- Access the remote shell for the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

- Run your openstack commands. For example, you can create a default network with the following command:

$ openstack network create default

- Exit the OpenStackClient pod:

$ exit
11.2. Accessing the Dashboard service (horizon) interface
You can access the OpenStack Dashboard service (horizon) interface by providing the Dashboard service endpoint URL in a browser.
Prerequisites
- The Dashboard service is enabled on the control plane. For information about how to enable the Dashboard service, see Enabling the Dashboard service (horizon) interface in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- You need to log into the Dashboard as the admin user.
Procedure
- Retrieve the admin password from the AdminPassword parameter in the osp-secret secret:

$ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d

- Retrieve the Dashboard service endpoint URL:

$ oc get horizons horizon -o jsonpath='{.status.endpoint}'

- Open a browser.
- Enter the Dashboard endpoint URL.
-
Log in to the Dashboard by providing the username of
admin
and the admin password.
Chapter 12. Tuning NFV in a Red Hat OpenStack Services on OpenShift environment
This section contains information about how to tune your Red Hat OpenStack Services on OpenShift (RHOSO) NFV environment.
12.1. Managing port security in NFV environments
Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules.
By default, the port_security_enabled
parameter is set to enabled
on newly created Networking service (neutron) networks in Red Hat OpenStack Services on OpenShift (RHOSO) environments. Newly created ports copy the value of the port_security_enabled
parameter from the network they are created on.
For some NFV use cases, such as building a firewall or router, you must disable port security.
Prerequisites
-
You have the
oc
command line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-admin
privileges.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

$ oc rsh -n openstack openstackclient

- To disable port security on a single port, run the following command:

$ openstack port set --disable-port-security <port-id>

- To prevent port security from being enabled on any newly created port on a network, run the following command:

$ openstack network set --disable-port-security <network-id>

- Exit the openstackclient pod:

$ exit
12.2. Creating and using VF ports
By running various OpenStack CLI client commands, you can create and use virtual function (VF) ports.
Prerequisites
-
You have the
oc
command line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-admin
privileges.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

$ oc rsh -n openstack openstackclient

- Create a network of type vlan:

$ openstack network create trusted_vf_network --provider-network-type vlan \
  --provider-segment 111 --provider-physical-network sriov2 \
  --external --disable-port-security

- Create a subnet:

$ openstack subnet create --network trusted_vf_network \
  --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
  subnet-trusted_vf_network

- Create a port. Set the vnic-type option to direct, and set trusted=true in the binding-profile option:

$ openstack port create --network trusted_vf_network \
  --vnic-type direct --binding-profile trusted=true \
  trusted_vf_network_port_trusted

- Create an instance, and bind it to the previously created trusted port:

$ openstack server create --image rhel --flavor dpdk --network internal \
  --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted

- Exit the openstackclient pod:

$ exit
Verification
Confirm the trusted VF configuration on the hypervisor by performing the following steps:
- On the Compute node where you created the instance, enter the following command:

$ ip link

Sample output:

7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
    vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
    vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off
-
Verify that the trust status of the VF is
trust on
. The example output contains details of an environment that contains two ports. Note thatvf 6
contains the texttrust on
. -
You can disable spoof checking if you set
port_security_enabled: false
in the Networking service (neutron) network, or if you include the argument--disable-port-security
when you run theopenstack port create
command.
12.3. Known limitations for NUMA-aware vSwitches
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Services on OpenShift (RHOSO) network functions virtualization infrastructure (NFVi).
- You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies.
- If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node.
- Configure NUMA affinity for data plane networks, not management networks.
- NUMA affinity for tunneled networks is a global setting that applies to all VMs.
12.4. Quality of Service (QoS) in NFVi environments
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Services on OpenShift (RHOSO) networks in a network functions virtualization infrastructure (NFVi).
In NFVi environments, QoS support is limited to the following rule types:
- minimum bandwidth on SR-IOV, if supported by the vendor.
- bandwidth limit on SR-IOV and OVS-DPDK egress interfaces.
12.5. Creating an HCI data plane that uses DPDK
You can deploy your NFV infrastructure with hyperconverged nodes, by co-locating and configuring Compute and Ceph Storage services for optimized resource usage.
For more information about hyperconverged infrastructure (HCI), see Deploying a hyperconverged infrastructure environment.
12.5.1. Example NUMA node configuration
For increased performance, place the tenant network and the Ceph Object Storage Daemons (OSDs) on one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs on another NUMA node, such as NUMA-1.
NUMA-0 | NUMA-1 |
---|---|
Number of Ceph OSDs * 4 HT | Guest vCPU for the VNF and non-NFV VMs |
DPDK lcore - 2 HT | DPDK lcore - 2 HT |
DPDK PMD - 2 HT | DPDK PMD - 2 HT |
NUMA-0 | NUMA-1 | |
---|---|---|
Ceph OSD | 32,34,36,38,40,42,76,78,80,82,84,86 | |
DPDK-lcore | 0,44 | 1,45 |
DPDK-pmd | 2,46 | 3,47 |
nova | 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 |
12.5.2. Recommended configuration for HCI-DPDK deployments
The following table lists the parameters that you can tune for HCI deployments:
Block Device Type | OSDs, Memory, vCPUs per device |
---|---|
NVMe | Memory : 5GB per OSD OSDs per device: 4 vCPUs per device: 3 |
SSD | Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 4 |
HDD | Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 1 |
Use the same NUMA node for the following functions:
- Disk controller
- Storage networks
- Storage CPU and memory
Allocate another NUMA node for the following functions of the DPDK provider network:
- NIC
- PMD CPUs
- Socket memory