Deploying a network functions virtualization environment
Planning, installing, and configuring network functions virtualization (NFV) in Red Hat OpenStack Services on OpenShift
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV)
Network functions virtualization (NFV) is a software-based solution that helps communication service providers (CSPs) to move beyond the traditional, proprietary hardware to achieve greater efficiency and agility and to reduce operational costs.
Using NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment allows for IT and network convergence by providing a virtualized infrastructure that uses the standard virtualization technologies to virtualize network functions (VNFs) that run on hardware devices such as switches, routers, and storage.
1.1. Advantages of NFV
The main advantages of implementing network functions virtualization (NFV) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment are:
- Accelerates the time-to-market by enabling you to quickly deploy and scale new networking services to address changing demands.
- Supports innovation by enabling service developers to self-manage their resources and prototype using the same platform that will be used in production.
- Addresses customer demands in hours or minutes instead of weeks or days, without sacrificing security or performance.
- Reduces capital expenditure because it uses commodity-off-the-shelf hardware instead of expensive tailor-made equipment.
- Uses streamlined operations and automation that optimize day-to-day tasks to improve employee productivity and reduce operational costs.
1.2. Supported Configurations for NFV Deployments
Red Hat supports network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments using Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV).
Other configurations include:
- Open vSwitch (OVS) with LACP
- Hyper-converged infrastructure (HCI)
1.3. NFV data plane connectivity
With the introduction of network functions virtualization (NFV), more networking vendors are starting to implement their traditional devices as VNFs. While the majority of networking vendors are considering virtual machines, some are also investigating a container-based approach as a design choice. A Red Hat OpenStack Services on OpenShift (RHOSO) environment must be rich and flexible for two primary reasons:
- Application readiness - Network vendors are currently in the process of transforming their devices into VNFs. Different VNFs in the market have different maturity levels; common barriers to this readiness include enabling RESTful interfaces in their APIs, evolving their data models to become stateless, and providing automated management operations. OpenStack should provide a common platform for all.
- Broad use cases - NFV includes a broad range of applications that serve different use cases. For example, Virtual Customer Premise Equipment (vCPE) aims at providing a number of network functions such as routing, firewall, virtual private network (VPN), and network address translation (NAT) at customer premises. Virtual Evolved Packet Core (vEPC) is a cloud architecture that provides a cost-effective platform for the core components of a Long-Term Evolution (LTE) network, allowing dynamic provisioning of gateways and mobile endpoints to sustain the increased volumes of data traffic from smartphones and other devices.
These use cases are implemented using different network applications and protocols, and require different connectivity, isolation, and performance characteristics from the infrastructure. It is also common to separate between control plane interfaces and protocols and the actual forwarding plane. OpenStack must be flexible enough to offer different datapath connectivity options.
In principle, there are two common approaches for providing data plane connectivity to virtual machines:
- Direct hardware access bypasses the Linux kernel and provides secure direct memory access (DMA) to the physical NIC using technologies such as PCI Passthrough or single root I/O virtualization (SR-IOV) for both Virtual Function (VF) and Physical Function (PF) pass-through.
- Using a virtual switch (vswitch), implemented as a software service of the hypervisor. Virtual machines are connected to the vSwitch using virtual interfaces (vNICs), and the vSwitch is capable of forwarding traffic between virtual machines, as well as between virtual machines and the physical network.
Some of the fast data path options are as follows:
- Single Root I/O Virtualization (SR-IOV) is a standard that makes a single PCI hardware device appear as multiple virtual PCI devices. It works by introducing Physical Functions (PFs), which are the fully featured PCIe functions that represent the physical hardware ports, and Virtual Functions (VFs), which are lightweight functions that are assigned to the virtual machines. To the VM, the VF resembles a regular NIC that communicates directly with the hardware. NICs support multiple VFs (see the sketch after this list).
- Open vSwitch (OVS) is an open source software switch that is designed to be used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and also offers support for SDN protocols such as OpenFlow to create user-defined overlay networks (for example, VXLAN). OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using physical NICs. OVS now supports connection tracking (conntrack) with built-in firewall capability to avoid the overhead of Linux bridges that use iptables/ebtables. Red Hat OpenStack Platform environments offer default OpenStack Networking (neutron) integration with OVS.
- Data Plane Development Kit (DPDK) consists of a set of libraries and poll mode drivers (PMD) for fast packet processing. It is designed to run mostly in user space, enabling applications to perform their own packet processing directly from or to the NIC. DPDK reduces latency and allows more packets to be processed. DPDK Poll Mode Drivers (PMDs) run in a busy loop, constantly scanning the NIC ports on the host and vNIC ports in the guest for the arrival of packets.
- DPDK accelerated Open vSwitch (OVS-DPDK) is Open vSwitch bundled with DPDK for a high-performance user-space solution with Linux kernel bypass and direct memory access (DMA) to physical NICs. The idea is to replace the standard OVS kernel data path with a DPDK-based data path, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The advantage of this architecture is that it is mostly transparent to users. The interfaces it exposes, such as OpenFlow, OVSDB, and the command line, remain mostly the same.
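Following up on the SR-IOV item in the list above, the following is a minimal sketch of how VFs can be created manually on a Linux host for experimentation. The interface name enp3s0f0 and the VF count are placeholders; in a RHOSO deployment the VF count is normally set declaratively in the data plane network configuration rather than by hand.

  # Check how many VFs the physical function supports
  cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

  # Create 8 VFs on the physical function (requires root)
  echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs

  # List the physical function and its new virtual functions
  ip link show enp3s0f0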
1.4. ETSI NFV architecture
The European Telecommunications Standards Institute (ETSI) is an independent standardization group that develops standards for information and communications technologies (ICT) in Europe.
Network functions virtualization (NFV) focuses on addressing problems involved in using proprietary hardware devices. With NFV, the necessity to install network-specific equipment is reduced, depending upon the use case requirements and economic benefits. The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) sets the requirements, reference architecture, and the infrastructure specifications necessary to ensure virtualized functions are supported.
Red Hat is offering an open-source based cloud-optimized solution to help the Communication Service Providers (CSP) to achieve IT and network convergence. Red Hat adds NFV features such as single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) to Red Hat OpenStack Services on OpenShift (RHOSO) environments.
1.5. NFV ETSI architecture and components
In general, a network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments has the following components:
Figure 1.1. NFV ETSI architecture and components
- Virtualized Network Functions (VNFs) - the software implementation of routers, firewalls, load balancers, broadband gateways, mobile packet processors, servicing nodes, signalling, location services, and other network functions.
- NFV Infrastructure (NFVi) - the physical resources (compute, storage, network) and the virtualization layer that make up the infrastructure. The network includes the datapath for forwarding packets between virtual machines and across hosts. This allows you to install VNFs without being concerned about the details of the underlying hardware. NFVi forms the foundation of the NFV stack. NFVi supports multi-tenancy and is managed by the Virtual Infrastructure Manager (VIM). Enhanced Platform Awareness (EPA) improves the virtual machine packet forwarding performance (throughput, latency, jitter) by exposing low-level CPU and NIC acceleration components to the VNF.
- NFV Management and Orchestration (MANO) - the management and orchestration layer focuses on all the service management tasks required throughout the life cycle of the VNF. The main goal of MANO is to allow service definition, automation, error-correlation, monitoring, and life-cycle management of the network functions offered by the operator to its customers, decoupled from the physical infrastructure. This decoupling requires additional layers of management, provided by the Virtual Network Function Manager (VNFM). VNFM manages the life cycle of the virtual machines and VNFs by either interacting directly with them or through the Element Management System (EMS) provided by the VNF vendor. The other important component defined by MANO is the Orchestrator, also known as NFVO. NFVO interfaces with various databases and systems including Operations/Business Support Systems (OSS/BSS) on the top and the VNFM on the bottom. If the NFVO wants to create a new service for a customer, it asks the VNFM to trigger the instantiation of a VNF, which may result in multiple virtual machines.
- Operations and Business Support Systems (OSS/BSS) - provides the essential business function applications, for example, operations support and billing. The OSS/BSS needs to be adapted to NFV, integrating with both legacy systems and the new MANO components. The BSS systems set policies based on service subscriptions and manage reporting and billing.
- Systems Administration, Automation and Life-Cycle Management - manages system administration, automation of the infrastructure components and life cycle of the NFVi platform.
1.6. Red Hat NFV components
Red Hat’s solution for network functions virtualization (NFV) includes a range of products that can act as the different components of the NFV framework in the ETSI model. The following products from the Red Hat portfolio integrate into an NFV solution:
- Red Hat OpenStack Services on OpenShift (RHOSO) - Supports IT and NFV workloads. The Enhanced Platform Awareness (EPA) features deliver deterministic performance improvements through CPU pinning, huge pages, Non-Uniform Memory Access (NUMA) affinity, and network adaptors (NICs) that support SR-IOV and OVS-DPDK.
- Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host - Create virtual machines and containers as VNFs.
- Red Hat Ceph Storage - Provides the unified elastic and high-performance storage layer for all the needs of the service provider workloads.
- Red Hat JBoss Middleware and OpenShift Enterprise by Red Hat - Optionally provide the ability to modernize the OSS/BSS components.
- Red Hat CloudForms - Provides a VNF manager and presents data from multiple sources, such as the VIM and the NFVi in a unified display.
- Red Hat Satellite and Ansible by Red Hat - Optionally provide enhanced systems administration, automation and life-cycle management.
Chapter 2. NFV performance considerations
The Red Hat OpenStack Services on OpenShift (RHOSO) network functions virtualization (NFV) solution exploits the Kernel-based Virtual Machine (KVM) hypervisor to match or exceed the performance of physical implementations, especially with regard to throughput, latency, and jitter.
You configure RHOSO Compute nodes to enforce resource partitioning and fine tuning to achieve line rate performance for the guest virtual network functions (VNFs).
You can enable high-performance packet switching between physical NICs and virtual machines using data plane development kit (DPDK) accelerated virtual machines. Open vSwitch (OVS) embeds support for Data Plane Development Kit (DPDK) and includes support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line-rate performance for guest VNFs.
Single root I/O virtualization (SR-IOV) networking provides enhanced performance, including improved throughput for specific networks and virtual machines.
Other important features for performance tuning include huge pages, NUMA alignment, host isolation, and CPU pinning. VNF flavors require huge pages and emulator thread isolation for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss.
2.1. CPUs and NUMA nodes
Previously, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).
In Non-Uniform Memory Access (NUMA), system memory is divided into zones called nodes, which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than memory connected to remote CPUs on that system. Normally, each socket on a NUMA system has a local memory node whose contents can be accessed faster than the memory in the node local to another CPU or the memory on a bus shared by all CPUs.
Similarly, physical NICs are placed in PCI slots on the Compute node hardware. These slots connect to specific CPU sockets that are associated with a particular NUMA node. For optimum performance, connect your datapath NICs to the same NUMA nodes in your CPU configuration (SR-IOV or OVS-DPDK).
The performance impact of NUMA misses is significant, generally starting at a 10% performance hit or higher. Each CPU socket can have multiple CPU cores, which are treated as individual CPUs for virtualization purposes.
For more information about NUMA, see What is NUMA and how does it work on Linux?
2.1.1. NUMA node example
The following diagram provides an example of a two-node NUMA system and the way the CPU cores and memory pages are made available:
Figure 2.1. Example: two-node NUMA system
Remote memory available via Interconnect is accessed only if VM1 from NUMA node 0 has a CPU core in NUMA node 1. In this case, the memory of NUMA node 1 acts as local for the third CPU core of VM1 (for example, if VM1 is allocated with CPU 4 in the diagram above), but at the same time, it acts as remote memory for the other CPU cores of the same VM.
2.1.2. NUMA aware instances
You can configure an OpenStack environment to use NUMA topology awareness on systems with a NUMA architecture. When running a guest operating system in a virtual machine (VM) there are two NUMA topologies involved:
- NUMA topology of the physical hardware of the host
- NUMA topology of the virtual hardware exposed to the guest operating system
You can optimize the performance of guest operating systems by aligning the virtual hardware with the physical hardware NUMA topology.
2.2. CPU pinning
CPU pinning is the ability to run a specific virtual machine’s virtual CPU on a specific physical CPU, in a given host. vCPU pinning provides similar advantages to task pinning on bare-metal systems. Since virtual machines run as user space tasks on the host operating system, pinning increases cache efficiency.
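For example, a minimal sketch of requesting CPU pinning and emulator thread isolation through Compute service (nova) flavor extra specs. The flavor name nfv-pinned and its sizing are placeholders:

  $ openstack flavor create nfv-pinned --ram 8192 --disk 20 --vcpus 8
  $ openstack flavor set nfv-pinned \
      --property hw:cpu_policy=dedicated \
      --property hw:emulator_threads_policy=isolate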
2.3. Huge pages
Physical memory is segmented into contiguous regions called pages. For efficiency, the system retrieves memory by accessing entire pages instead of individual bytes of memory. To perform this translation, the system looks in the Translation Lookaside Buffers (TLB), which contain the virtual-to-physical address mappings for the most recently or frequently used pages. When the system cannot find a mapping in the TLB, the processor must iterate through all of the page tables to determine the address mappings. Optimize the TLB to minimize the performance penalty that occurs during these TLB misses.
The typical page size in an x86 system is 4KB, with other larger page sizes available. Larger page sizes mean that there are fewer pages overall, and therefore increases the amount of system memory that can have its virtual to physical address translation stored in the TLB. Consequently, this reduces TLB misses, which increases performance. With larger page sizes, there is an increased potential for memory to be under-utilized as processes must allocate in pages, but not all of the memory is likely required. As a result, choosing a page size is a compromise between providing faster access times with larger pages, and ensuring maximum memory utilization with smaller pages.
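For example, a flavor can request huge pages for its guest memory through the hw:mem_page_size extra spec. This is a minimal sketch; the flavor name is the placeholder used earlier, and the 1 GB page size must match the huge pages configured on the Compute nodes:

  $ openstack flavor set nfv-pinned --property hw:mem_page_size=1GB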
Chapter 3. Requirements for NFV
Before you deploy your network functions virtualization (NFV) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment, become familiar with the hardware and software requirements.
Red Hat certifies hardware for use with RHOSO. For more information, see Certified hardware.
3.1. Tested NICs for NFV
For a list of tested NICs for NFV, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix.
Use the default driver for the supported NIC, unless you are configuring NVIDIA (Mellanox) network interfaces. For NVIDIA network interfaces, you must specify the kernel driver during configuration.
- Example
- In this example, an OVS-DPDK port is being configured. Because the NIC being used is an NVIDIA ConnectX-5, the driver must be specified:

  members:
  - type: ovs_dpdk_port
    name: dpdk0
    driver: mlx5_core
    members:
    - type: interface
      name: enp3s0f0
3.2. Discovering your NUMA node topology
For network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments, you must understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, perform one of the following tasks:
Procedure
- Enable hardware introspection to retrieve this information from bare-metal nodes.
- Log on to each bare-metal node to manually collect the information, for example by using the commands shown below.
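The following commands, run on a Compute node, summarize the NUMA topology. This is a minimal sketch; numactl might need to be installed first, <nic_name> is a placeholder for your datapath interface, and the output format varies by hardware.

  $ lscpu | grep -i numa
  $ numactl --hardware
  # NUMA node that a given NIC is attached to (-1 means no NUMA affinity reported)
  $ cat /sys/class/net/<nic_name>/device/numa_node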
3.3. NFV BIOS settings
The following table describes the required BIOS settings for network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments:
You must enable SR-IOV global and NIC settings in the BIOS, or your RHOSO deployment with SR-IOV Compute nodes will fail.
| Parameter | Setting |
|---|---|
| C3 Power State | Disabled. |
| C6 Power State | Disabled. |
| MLC Streamer | Enabled. |
| MLC Spatial Prefetcher | Enabled. |
| DCU Data Prefetcher | Enabled. |
| DCU Instruction Prefetcher | Enabled. |
| CPU Power and Performance Policy | Performance. |
| Memory RAS and Performance Config → NUMA Optimized | Enabled. |
| Turbo Boost | Disabled in NFV deployments that require deterministic performance. Enabled in all other scenarios. |
| VT-d | Enabled for Intel cards if VFIO functionality is needed. |
| NUMA memory interleaving | Disabled. |
If you are using the TuneD CPU-partitioning power saving profile, cpu-partitioning-powersave, then you must enable the appropriate C-states power state level in the BIOS. For more information about BIOS settings, contact your hardware vendor.
On processors that use the intel_idle driver, Red Hat Enterprise Linux can ignore BIOS settings and re-enable the processor C-state.
You can disable intel_idle and instead use the acpi_idle driver by specifying the key-value pair intel_idle.max_cstate=0 on the kernel boot command line.
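For example, one way to add this parameter on a RHEL data plane node is with grubby; this is a sketch only, a reboot is required afterwards, and your deployment workflow might manage kernel arguments through its own mechanism instead:

  $ grubby --update-kernel=ALL --args="intel_idle.max_cstate=0"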
Confirm that the processor is using the acpi_idle driver by checking the contents of current_driver:
$ cat /sys/devices/system/cpu/cpuidle/current_driver
- Sample output
acpi_idle
You will experience some latency after changing drivers, because it takes time for the TuneD daemon to start. However, after TuneD loads, the processor does not use the deeper C-state.
3.4. Supported drivers for NFV
For a complete list of supported drivers for network functions virtualization (NFV) on Red Hat OpenStack Services on OpenShift (RHOSO) environments, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.
For a list of NICs tested for NFV on RHOSO environments, see Tested NICs for NFV.
Chapter 4. Planning an SR-IOV deployment
You can optimize your single root I/O virtualization (SR-IOV) deployment for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments by choosing appropriate values for configuration parameters. Inform your choices by understanding how SR-IOV uses the Compute node hardware (CPU, NUMA nodes, memory, NICs).
To evaluate your hardware impact on the SR-IOV parameters, see Discovering your NUMA node topology.
4.1. NIC partitioning for an SR-IOV deployment
You can reduce the number of NICs that you need for each host by configuring single root I/O virtualization (SR-IOV) virtual functions (VFs) for Red Hat OpenStack Services on OpenShift (RHOSO) management networks and provider networks. When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. This feature has been validated on Intel Fortville NICs, and Mellanox CX-5 NICs.
To partition your NICs, you must adhere to the following requirements:
- The NICs, their applications, the VF guest, and OVS must reside on the same NUMA Compute node. Doing so helps to prevent performance degradation from cross-NUMA operations.
- Ensure that the NIC firmware is up-to-date. Yum or dnf updates might not complete the firmware update. For more information, see your vendor documentation.
Some of the initial configuration settings used during bare metal provisioning are not automatically removed after provisioning. In certain cases, this can lead to duplicate assignment of the IP address used for the provisioning control plane interface as specified in spec.bareMetalSetTemplate.ctlplaneInterface in your OpenStackDataPlaneNodeSet CR. If your os-net-config data plane deployment from unprovisioned nodes uses the same NIC as something other than a control plane interface, and partitions that NIC, your deployment may be disrupted by IP address conflicts.
To prevent these conflicts, use remove_config as described in Cleaning up obsolete host network configurations.
The conflicts do not happen if os-net-config uses the NIC as a control plane interface in the deployed data plane, or if it does not partition the NIC. In these cases you do not need to use remove_config.
4.2. Hardware partitioning for an SR-IOV deployment
To achieve high performance with SR-IOV, partition the resources between the host and the guest.
Figure 4.1. NUMA node topology
A typical topology includes 14 cores per NUMA node on dual-socket Compute nodes. Both hyper-threading (HT) and non-HT cores are supported. Each core has two sibling threads. One core is dedicated to the host on each NUMA node. The virtual network function (VNF) handles the SR-IOV interface bonding. All the interrupt requests (IRQs) are routed on the host cores. The VNF cores are dedicated to the VNFs. They provide isolation from other VNFs and isolation from the host. Each VNF must use resources on a single NUMA node. The SR-IOV NICs used by the VNF must also be associated with that same NUMA node. This topology does not have a virtualization overhead. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease and consistency, and to avoid inconsistencies that break proper isolation and cause preemption and packet loss. The host and virtual machine isolation depend on a tuned profile, which defines the boot parameters and any Red Hat OpenStack Platform modifications based on the list of isolated CPUs.
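As an illustration of the host and Compute (nova) partitioning described above, the following nova.conf-style snippet dedicates host cores to pinned instances and keeps a small set shared for host processes. The core ranges are placeholders for a 14-core-per-NUMA-node system, and how the snippet is delivered to the Compute nodes depends on your data plane configuration:

  [compute]
  # Cores reserved for pinned instance vCPUs
  cpu_dedicated_set = 2-13,16-27
  # Cores shared by host processes and unpinned workloads
  cpu_shared_set = 0-1,14-15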
4.3. Topology of an NFV SR-IOV deployment
The following image has two VNFs, each with a management interface represented by mgt and data plane interfaces. The management interface manages SSH access and similar traffic. The VNFs bond the data plane interfaces by using the DPDK library to ensure high availability. The image also has two provider networks for redundancy. The Compute node has two regular NICs bonded together and shared between the VNF management and the Red Hat OpenStack Platform API management.
Figure 4.2. NFV SR-IOV topology
The image shows a VNF that uses DPDK at an application level, and has access to SR-IOV virtual functions (VFs) and physical functions (PFs), for better availability or performance, depending on the fabric configuration. DPDK improves performance, while the VF/PF DPDK bonds provide support for failover, and high availability. The VNF vendor must ensure that the DPDK poll mode driver (PMD) supports the SR-IOV card that is being exposed as a VF/PF. The management network uses OVS, therefore the VNF sees a mgmt network device using the standard virtIO drivers. You can use that device to initially connect to the VNF, and ensure that the DPDK application bonds the two VF/PFs.
4.4. Topology for NFV SR-IOV without HCI
Observe the topology for SR-IOV without hyper-converged infrastructure (HCI) for NFV in the image below. It consists of Compute and Controller nodes with 1 Gbps NICs, and the RHOSO worker node.
Figure 4.3. NFV SR-IOV topology without HCI
Chapter 5. Planning an OVS-DPDK deployment
You can optimize your Open vSwitch with Data Plane Development Kit (OVS-DPDK) deployment for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments by choosing appropriate values for configuration parameters. Inform your choices by understanding how OVS-DPDK uses the Compute node hardware (CPU, NUMA nodes, memory, NICs).
When using OVS-DPDK and the OVS native firewall (a stateful firewall based on conntrack), you can track only packets that use ICMPv4, ICMPv6, TCP, and UDP protocols. OVS marks all other types of network traffic as invalid.
5.1. OVS-DPDK with CPU partitioning and NUMA topology
OVS-DPDK partitions the hardware resources for host, guests, and itself. The OVS-DPDK Poll Mode Drivers (PMDs) run DPDK active loops, which require dedicated CPU cores. Therefore, you must allocate some CPUs and huge pages to OVS-DPDK.
A sample partitioning includes 16 cores per NUMA node on dual-socket Compute nodes. The traffic requires additional NICs because you cannot share NICs between the host and OVS-DPDK.
Figure 5.1. NUMA topology: OVS-DPDK with CPU partitioning
You must reserve DPDK PMD threads on both NUMA nodes, even if a NUMA node does not have an associated DPDK NIC.
For optimum OVS-DPDK performance, reserve a block of memory local to the NUMA node. Choose NICs associated with the same NUMA node that you use for memory and CPU pinning. Ensure that both bonded interfaces are from NICs on the same NUMA node.
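The PMD core and socket memory reservations described above translate into Open vSwitch other_config values. The following is a minimal sketch using ovs-vsctl with placeholder values; in a RHOSO deployment these values are normally set through the data plane node set variables (for example, edpm_ovs_dpdk_pmd_core_list, used later in this guide) rather than by hand on the node:

  # Pin PMD threads to cores 2,3 on NUMA 0 and 10,11 on NUMA 1 (hexadecimal CPU mask)
  $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC0C
  # Reserve 1024 MB of huge page memory per NUMA node for DPDK
  $ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"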
5.2. OVS-DPDK with TCP segmentation offload
RHOSO 18.0.10 (Feature Release 3) promotes TCP segmentation offload (TSO) for RHOSO environments with OVS-DPDK from a technology preview to a generally available feature.
The segmentation process happens at the transport layer. It divides data from the upper stack layers into segments to support transport across and within networks at the network and data link layers.
Enable TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this feature in a previously deployed system is not supported.
Segmentation processing can happen on the host, where it consumes CPU resources. With TSO, segmentation is offloaded to NICs, to free up host resources and improve performance.
TSO for DPDK can be useful if your workload includes large frames that require TCP segmentation in the user space or kernel.
Additional resources
5.3. Enabling OVS-DPDK with TCP segmentation offload
You can configure your Red Hat OpenStack Services on OpenShift (RHOSO) OVS-DPDK environment to offload TCP segmentation to NICs (TSO).
Enable TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this feature in a previously deployed system is not supported.
Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
Procedure
- When you follow the instructions in Creating a set of data plane nodes with pre-provisioned nodes or Creating a set of data plane nodes with unprovisioned nodes, include the edpm_ovs_dpdk_enable_tso: true value pair in the OpenStackDataPlaneNodeSet manifest. For example:

  nodeTemplate:
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
      ansibleVars:
        edpm_ovs_dpdk_enable_tso: true
        edpm_bootstrap_command: |
          ...

- Complete the node set creation procedure.
Verification
After completing the node set procedure, run the following command on the Compute nodes:
$ ovs-vsctl get Open_vSwitch . other_config:userspace-tso-enable
5.4. Two NUMA node example OVS-DPDK deployment
The Red Hat OpenStack Services on OpenShift (RHOSO) Compute node in the following example includes two NUMA nodes:
- NUMA 0 has logical cores 0-7 (four physical cores). The sibling thread pairs are (0,1), (2,3), (4,5), and (6,7).
- NUMA 1 has cores 8-15. The sibling thread pairs are (8,9), (10,11), (12,13), and (14,15).
- Each NUMA node connects to a physical NIC, namely NIC1 on NUMA 0, and NIC2 on NUMA 1.
Figure 5.2. OVS-DPDK: two NUMA nodes example
Reserve the first physical core (both thread pairs) on each NUMA node (0,1 and 8,9) for non-datapath DPDK processes.
This example also assumes a 1500 MTU configuration, so the OvsDpdkSocketMemory is the same for all use cases:
OvsDpdkSocketMemory: "1024,1024"
- NIC 1 for DPDK, with one physical core for PMD
- In this use case, you allocate one physical core on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
cpu_dedicated_set: "4,5,6,7,12,13,14,15"
- NIC 1 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11"
cpu_dedicated_set: "6,7,12,13,14,15"
- NIC 2 for DPDK, with one physical core for PMD
- In this use case, you allocate one physical core on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
cpu_dedicated_set: "4,5,6,7,12,13,14,15"
- NIC 2 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,10,11,12,13"
cpu_dedicated_set: "4,5,6,7,14,15"
- NIC 1 and NIC2 for DPDK, with two physical cores for PMD
- In this use case, you allocate two physical cores on each NUMA node for PMD. The remaining cores are allocated for guest instances. The resulting parameter settings are:
edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11,12,13"
cpu_dedicated_set: "6,7,14,15"
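As with the TSO example earlier, these values are applied through the OpenStackDataPlaneNodeSet manifest. The following fragment is a hedged sketch that reuses the PMD core list from the last use case above; the exact variable names for delivering the remaining settings, such as cpu_dedicated_set, depend on your edpm-ansible and Compute service configuration:

  nodeTemplate:
    ansible:
      ansibleVars:
        # PMD cores from the "NIC 1 and NIC 2 for DPDK, with two physical cores for PMD" use case
        edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11,12,13"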
5.5. Topology of an NFV OVS-DPDK deployment
This example deployment shows an OVS-DPDK configuration and consists of two virtual network functions (VNFs) with two interfaces each:
- The management interface, represented by mgt.
- The data plane interface.
In the OVS-DPDK deployment, the VNFs operate with inbuilt DPDK that supports the physical interface. OVS-DPDK enables bonding at the vSwitch level. For improved performance in your OVS-DPDK deployment, separate kernel and OVS-DPDK NICs. To separate the management (mgt) network, connected to the Base provider network for the virtual machine, ensure you have additional NICs. The Compute node consists of two regular NICs for the Red Hat OpenStack Platform API management that can be reused by the Ceph API but cannot be shared with any OpenStack project.
Figure 5.3. Compute node: NFV OVS-DPDK
Figure 5.4. OVS-DPDK Topology for NFV
Chapter 6. Installing and preparing the OpenStack Operator
You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP OperatorHub. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.
For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat Knowledgebase article How RHOSO versions map to OpenStack Operators and OpenStackVersion CRs.
6.1. Prerequisites
- An operational RHOCP cluster, version 4.18. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- For the minimum RHOCP hardware requirements for hosting your RHOSO control plane, see Minimum RHOCP hardware requirements.
- For the minimum RHOCP network requirements, see RHOCP network requirements.
- For a list of the Operators that must be installed before you install the openstack-operator, see RHOCP software requirements.
- The oc command line tool is installed on your workstation.
- You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
6.2. Installing the OpenStack Operator by using the web console
You can use the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.
Procedure
- Log in to the RHOCP web console as a user with cluster-admin permissions.
- Select Operators → OperatorHub.
- In the Filter by keyword field, type OpenStack.
- Click the OpenStack Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
- On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
- On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
- Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
- Click Create OpenStack to open the Create OpenStack page.
- On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
6.3. Installing the OpenStack Operator by using the CLI
You can use the Red Hat OpenShift Container Platform (RHOCP) CLI (oc) to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub.
To install the OpenStack Operator by using the CLI, you create the openstack-operators namespace for the Red Hat OpenStack Platform (RHOSP) service Operators. You then create the OperatorGroup and Subscription custom resources (CRs) within the namespace. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.
Procedure
- Create the openstack-operators namespace for the RHOSP operators:

  $ cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Namespace
  metadata:
    name: openstack-operators
  spec:
    finalizers:
    - kubernetes
  EOF

- Create the OperatorGroup CR in the openstack-operators namespace:

  $ cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: openstack
    namespace: openstack-operators
  EOF

- Create the Subscription CR that subscribes to openstack-operator:

  $ cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openstack-operator
    namespace: openstack-operators
  spec:
    name: openstack-operator
    channel: stable-v1.0
    source: redhat-operators
    sourceNamespace: openshift-marketplace
    installPlanApproval: Manual
  EOF

- Wait for the install plan to be created:

  $ oc get installplan -n openstack-operators -o json | jq -r '.items[] | select(.spec.approval=="Manual" and .spec.approved==false) | .metadata.name' | head -n1

- Approve the install plan:

  $ oc patch installplan <install_plan_name> -n openstack-operators --type merge -p '{"spec":{"approved":true}}'

- Verify that the OpenStack Operator is installed:

  $ oc wait csv -n openstack-operators \
      -l operators.coreos.com/openstack-operator.openstack-operators="" \
      --for jsonpath='{.status.phase}'=Succeeded

- Create an instance of the openstack-operator:

  $ cat << EOF | oc apply -f -
  apiVersion: operator.openstack.org/v1beta1
  kind: OpenStack
  metadata:
    name: openstack
    namespace: openstack-operators
  EOF

- Confirm that the OpenStack Operator is deployed:

  $ oc wait openstack/openstack -n openstack-operators --for condition=Ready --timeout=500s
Additional resources
Chapter 7. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
7.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.
You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.
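A minimal sketch of using nodeSelector in an OpenStackControlPlane CR follows. The label key and value (nfv-role: storage) are hypothetical, and the exact placement of per-service nodeSelector fields can vary between service templates, so treat this as an illustration rather than a complete manifest:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    cinder:
      template:
        cinderVolumes:
          volume1:
            # Hypothetical node label that pins cinder-volume pods to dedicated nodes
            nodeSelector:
              nfv-role: storage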
7.2. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
Procedure
- Create the openstack project for the deployed RHOSO environment:

  $ oc new-project openstack

- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

  $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
  {
    "kubernetes.io/metadata.name": "openstack",
    "pod-security.kubernetes.io/enforce": "privileged",
    "security.openshift.io/scc.podSecurityLabelSync": "false"
  }

  If the security context constraint (SCC) is not "privileged", use the following commands to change it:

  $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
  $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

- Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

  $ oc project openstack
7.3. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.
For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.
You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.
Prerequisites
- You have installed python3-cryptography.
Procedure
- Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
- Add the following initial configuration to openstack_service_secret.yaml:

  apiVersion: v1
  data:
    AdminPassword: <base64_password>
    AodhPassword: <base64_password>
    BarbicanPassword: <base64_password>
    BarbicanSimpleCryptoKEK: <base64_fernet_key>
    CeilometerPassword: <base64_password>
    CinderPassword: <base64_password>
    DbRootPassword: <base64_password>
    DesignatePassword: <base64_password>
    GlancePassword: <base64_password>
    HeatAuthEncryptionKey: <base64_password>
    HeatPassword: <base64_password>
    IronicInspectorPassword: <base64_password>
    IronicPassword: <base64_password>
    ManilaPassword: <base64_password>
    MetadataSecret: <base64_password>
    NeutronPassword: <base64_password>
    NovaPassword: <base64_password>
    OctaviaPassword: <base64_password>
    PlacementPassword: <base64_password>
    SwiftPassword: <base64_password>
  kind: Secret
  metadata:
    name: osp-secret
    namespace: openstack
  type: Opaque

- Replace <base64_password> with a 32-character key that is base64 encoded.

  Note: The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

  You can use the following command to manually generate a base64 encoded password:

  $ echo -n <password> | base64

  Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

  $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

- Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

  $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)

- Create the Secret CR in the cluster:

  $ oc create -f openstack_service_secret.yaml -n openstack

- Verify that the Secret CR is created:

  $ oc describe secret osp-secret -n openstack
7.3.1. Example Secret CR for secure access to the RHOSO service pods
You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:
$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF
Chapter 8. Preparing networks for RHOSO with NFV
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) on a network functions virtualization (NFV) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
8.1. Networks for Red Hat OpenStack Services on OpenShift
Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.
- Control plane network
- Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- Designate network
- Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
- Designateext network
- Used to provide external access to the DNS service resolver and the DNS servers.
- External network
An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
Note: When an external network is used for workloads, an OVN gateway is required in some use cases. For more information on use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.
- Internal API network
- Used for internal communication between RHOSO components.
- Octavia network
- Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
- Storage network
- Used for block storage, RBD, NFS, FC, and iSCSI.
- Storage Management network
An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

Note: For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.
- Tenant (project) network
- Used for data communication between virtual machine instances within the cloud deployment.
Figure 8.1. Physical networks for RHOSO
The following table details the default networks used in a RHOSO deployment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| storagemgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
8.2. NIC configurations for NFV
The Red Hat OpenStack Services on OpenShift (RHOSO) nodes that host the data plane require one of the following NIC configurations:
- Single NIC configuration - One NIC for the provisioning network on the native VLAN and tagged VLANs that use subnets for the different data plane network types.
- Dual NIC configuration - One NIC for the provisioning network and the other NIC for the external network.
- Dual NIC configuration - One NIC for the provisioning network on the native VLAN, and the other NIC for tagged VLANs that use subnets for different data plane network types.
- Multiple NIC configuration - Each NIC uses a subnet for a different data plane network type.
8.3. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:
8.3.1. Preparing RHOCP with isolated network interfaces
You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure
-
Create a
NodeNetworkConfigurationPolicy(nncp) CR file on your workstation, for example,openstack-nncp.yaml. Retrieve the names of the worker nodes in the RHOCP cluster:
$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"Discover the network configuration:
$ oc get nns/<worker_node> -o yaml | more-
Replace
<worker_node>with the name of a worker node retrieved in step 2, for example,worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Networks for Red Hat OpenStack Services on OpenShift.
  In the following example, the
nncpCR configures theenp6s0interface for worker node 1,osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: osp-enp6s0-worker-1 spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: internalapi state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: storage state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: tenant state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true - description: Configuring enp6s0 ipv4: address: - ip: 192.168.122.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false mtu: 1500 name: enp6s0 state: up type: ethernet - description: octavia vlan interface name: octavia state: up type: vlan vlan: base-iface: enp6s0 id: 24 reorder-headers: true - bridge: options: stp: enabled: false port: - name: enp6s0.24 description: Configuring bridge octbr mtu: 1500 name: octbr state: up type: linux-bridge - description: designate vlan interface ipv4: address: - ip: 172.26.0.10 prefix-length: "24" dhcp: false enabled: true ipv6: enabled: false mtu: 1500 name: designate state: up type: vlan vlan: base-iface: enp7s0 id: "25" reorder-headers: true - description: designate external vlan interface ipv4: address: - ip: 172.34.0.10 prefix-length: "24" dhcp: false enabled: true ipv6: enabled: false mtu: 1500 name: designateext state: up type: vlan vlan: base-iface: enp7s0 id: "26" reorder-headers: true nodeSelector: kubernetes.io/hostname: worker-1 node-role.kubernetes.io/worker: ""Create the
nncp CR in the cluster:
  $ oc apply -f openstack-nncp.yaml
- Verify that the nncp CR is created:
  $ oc get nncp -w
  NAME                  STATUS        REASON
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Available     SuccessfullyConfigured
8.3.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
If you frequently recreate pods in your environment, use the Whereabouts reconciler to manage dynamic IP address assignments for the pods. For more information, see Creating a whereabouts-reconciler daemon set in the RHOCP Multiple networks guide.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks:
  - internalapi, storage, ctlplane, and tenant networks of type macvlan.
  - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
  - designate, the network used internally by the DNS service (designate) to manage the DNS servers.
  - designateext, the network used to provide external access to the DNS service resolver and the DNS servers.
apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "internalapi", "type": "macvlan", "master": "internalapi", "ipam": { "type": "whereabouts", "range": "172.17.0.0/24", "range_start": "172.17.0.30", "range_end": "172.17.0.70" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ctlplane namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "ctlplane", "type": "macvlan", "master": "enp6s0", "ipam": { "type": "whereabouts", "range": "192.168.122.0/24", "range_start": "192.168.122.30", "range_end": "192.168.122.70" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: storage namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "storage", "type": "macvlan", "master": "storage", "ipam": { "type": "whereabouts", "range": "172.18.0.0/24", "range_start": "172.18.0.30", "range_end": "172.18.0.70" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: tenant namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "tenant", "type": "macvlan", "master": "tenant", "ipam": { "type": "whereabouts", "range": "172.19.0.0/24", "range_start": "172.19.0.30", "range_end": "172.19.0.70" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: octavia name: octavia namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "octavia", "type": "bridge", "bridge": "octbr", "ipam": { "type": "whereabouts", "range": "172.23.0.0/24", "range_start": "172.23.0.30", "range_end": "172.23.0.70", "routes": [ { "dst": "172.24.0.0/16", "gw" : "172.23.0.150" } ] } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: designate namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "designate", "type": "macvlan", "master": "designate", "ipam": { "type": "whereabouts", "range": "172.26.0.0/16", "range_start": "172.26.0.30", "range_end": "172.26.0.70" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: designateext namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "designateext", "type": "macvlan", "master": "designateext", "ipam": { "type": "whereabouts", "range": "172.34.0.0/16", "range_start": "172.34.0.30", "range_end": "172.34.0.70" } }-
- metadata.namespace: The namespace where the services are deployed.
- "master": The node interface name associated with the network, as defined in the nncp CR.
- "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
- "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
- Create the NetworkAttachmentDefinition CR in the cluster:
  $ oc apply -f openstack-net-attach-def.yaml
- Verify that the NetworkAttachmentDefinition CR is created:
  $ oc get net-attach-def -n openstack
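If you later want to confirm which isolated networks a given service pod was attached to, you can inspect the standard Multus status annotation on the pod. This is only a sketch; the pod name is a placeholder:
$ oc get pod <service_pod> -n openstack \
    -o jsonpath="{.metadata.annotations['k8s\.v1\.cni\.cncf\.io/network-status']}"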
8.3.3. Preparing RHOCP for RHOSO network VIPs
You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
- In the
IPAddressPoolCR file, configure anIPAddressPoolresource on the isolated network to specify the IP address ranges over which MetalLB has authority:apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: ctlplane spec: addresses: - 192.168.122.80-192.168.122.90 autoAssign: true avoidBuggyIPs: false serviceAllocation: namespaces: - openstack --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: internalapi namespace: metallb-system spec: addresses: - 172.17.0.80-172.17.0.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: designateext spec: addresses: - 172.34.0.80-172.34.0.120 autoAssign: true avoidBuggyIPs: false ----
- spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
- spec.serviceAllocation: Specify the namespaces that can consume IP addresses from the IPAddressPool range. This is an optional field that you can configure to prevent non-RHOSO services hosted on your RHOCP cluster from consuming IP addresses from the IPAddressPool range.
For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
- Create the IPAddressPool CR in the cluster:
  $ oc apply -f openstack-ipaddresspools.yaml
- Verify that the IPAddressPool CR is created:
  $ oc describe -n metallb-system IPAddressPool
- Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
- In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.
  In the following example, each
L2AdvertisementCR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: ctlplane namespace: metallb-system spec: ipAddressPools: - ctlplane interfaces: - enp6s0 nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: "" --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: internalapi namespace: metallb-system spec: ipAddressPools: - internalapi interfaces: - internalapi nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: "" --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: designateext namespace: metallb-system spec: ipAddressPools: - designateext interfaces: - designateext nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: ""-
spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.
For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.
- Create the L2Advertisement CRs in the cluster:
  $ oc apply -f openstack-l2advertisement.yaml
- Verify that the L2Advertisement CRs are created:
  $ oc get -n metallb-system L2Advertisement
  NAME           IPADDRESSPOOLS      IPADDRESSPOOL SELECTORS   INTERFACES
  ctlplane       ["ctlplane"]                                  ["enp6s0"]
  designateext   ["designateext"]                              ["designateext"]
  internalapi    ["internalapi"]                               ["internalapi"]
  storage        ["storage"]                                   ["storage"]
  tenant         ["tenant"]                                    ["tenant"]
- If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.
Check the network back end used by your cluster:
$ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:
$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
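To confirm that the change was applied, you can read the same field back. This is only a sketch of one way to check it:
$ oc get network.operator cluster \
    --output=jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'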
8.4. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:
$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure
- Create a file named openstack_netconfig.yaml on your workstation.
- Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:
  apiVersion: network.openstack.org/v1beta1
  kind: NetConfig
  metadata:
    name: openstacknetconfig
    namespace: openstack
- In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.
  Note: If you are using pre-provisioned data plane nodes, then the control plane network and IP address must match the pre-provisioned data plane nodes. If the ctlplane network uses tagged VLANs, then the VLAN ID must also match the VLAN ID on the pre-provisioned data plane node.
  The following example creates isolated networks for the data plane:
spec: networks: - name: ctlplane dnsDomain: ctlplane.example.com subnets: - name: subnet1 allocationRanges: - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.200 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 - name: internalapi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 excludeAddresses: - 172.17.0.10 - 172.17.0.12 cidr: 172.17.0.0/24 vlan: 20 - name: external dnsDomain: external.example.com subnets: - name: subnet1 allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 gateway: 10.0.0.1 - name: storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22-
- spec.networks.name: The name of the network, for example, ctlplane.
- spec.networks.subnets: The IPv4 subnet specification.
- spec.networks.subnets.name: The name of the subnet, for example, subnet1.
- spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the whereabouts IP address pool range.
- spec.networks.subnets.excludeAddresses: Optional: List of IP addresses from the allocation range that must not be used by data plane nodes.
- spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.
- Save the openstack_netconfig.yaml definition file.
- Create the data plane network:
  $ oc create -f openstack_netconfig.yaml -n openstack
- To verify that the data plane network is created, view the openstacknetconfig resource:
  $ oc get netconfig/openstacknetconfig -n openstack
  If you see errors, check the underlying network-attach-definition and node network configuration policies:
  $ oc get network-attachment-definitions -n openstack
  $ oc get nncp
Chapter 9. Creating the control plane for NFV environments
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane hosts the RHOSO services that manage your RHOSO cloud. RHOSO control plane services run as a Red Hat OpenShift Container Platform (RHOCP) workload, deployed by Operators in OpenShift.
When you configure OpenStack control plane services, you use one custom resource (CR) definition called OpenStackControlPlane.
RHOSO control plane services provide APIs. They do not run Compute node workloads.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run RHOSO CLI commands.
$ oc rsh -n openstack openstackclient
9.1. Prerequisites
- The RHOCP cluster is prepared for RHOSO network isolation. For more information, see Preparing RHOCP for RHOSO networks.
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the OpenStack Operator.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:
  $ oc get networkpolicy -n openstack
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
9.2. Creating the control plane
Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:
- Create the control plane.
- Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with example configurations for each service. The procedure helps you create an operational control plane environment. You can use the environment to test and troubleshoot issues before additional required service customization. Services can be added and customized after the initial deployment.
To configure a service, you use the CustomServiceConfig field in a service specification to pass OpenStack configuration parameters in INI file format. For more information about the available configuration parameters, see Configuration reference.
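For example, a minimal sketch of passing such options through customServiceConfig (the service and option shown here are illustrative only; any valid INI options for that service can be supplied):
neutron:
  template:
    customServiceConfig: |
      [DEFAULT]
      debug = true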
For more information on how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For more information, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
For NFV environments, when you add the Networking service (neutron) and OVN service configurations, you must supply the following information:
- Physical networks where your gateways are located.
- Path to vhost sockets.
- VLAN ranges.
- Number of NUMA nodes.
- NICs that connect to the gateway networks.
If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver to the Networking service configuration.
Procedure
- Create the openstack project for the deployed RHOSO environment:
  $ oc new-project openstack
- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:
  $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
  {
    "kubernetes.io/metadata.name": "openstack",
    "pod-security.kubernetes.io/enforce": "privileged",
    "security.openshift.io/scc.podSecurityLabelSync": "false"
  }
  If the security context constraint (SCC) is not "privileged", use the following commands to change it:
  $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
  $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
- Create a file on your workstation named
openstack_control_plane.yaml to define the OpenStackControlPlane CR:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
- Specify the
Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
  spec:
    secret: osp-secret
- Specify the
storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
  spec:
    secret: osp-secret
    storageClass: your-RHOCP-storage-class
  Note: For information about storage classes, see Creating a storage class.
Add the following service configurations:
- Block Storage service (cinder)
cinder: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: volume1: networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the serviceImportantThis definition for the Block Storage service is only a sample. You might need to modify it for your NFV environment. For more information, see Planning storage and shared file systems in Planning your deployment.
Note: For the initial control plane deployment, the cinderBackup and cinderVolumes services are deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Block Storage service and the backup service.
- Compute service (nova)
nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: customServiceConfig: | [filter_scheduler] enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter, AggregateInstanceExtraSpecsFilter available_filters = nova.scheduler.filters.all_filters metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cellTemplates: cell1: noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane secret: osp-secretNoteA full set of Compute services (nova) are deployed by default for each of the default cells,
cell0andcell1:nova-api,nova-metadata,nova-scheduler, andnova-conductor. Thenovncproxyservice is also enabled forcell1by default.- DNS service for the data plane
dns: template: options: - key: server values: - <IP address for DNS server reachable from dnsmasq pod> override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2-
- options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there is one key-value pair defined because there is only one DNS server configured to forward requests to.
- key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:
server -
rev-server -
srv-host -
txt-record -
ptr-record -
rebind-domain-ok -
naptr-record -
cname -
host-record -
caa-record -
dns-rr -
auth-zone -
synth-domain -
no-negcache -
local
-
- values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.
  Note: This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
-
- Galera cluster
A Galera cluster for use by all RHOSO services (
openstack), and a Galera cluster for use by the Compute service forcell1(openstack-cell1):galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3- Identity service (keystone)
keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3- Image service (glance)
glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # backend needs to be configured to activate the service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storageNoteFor the initial control plane deployment, the Image service is deployed but not activated (replicas: 0). You can configure your control plane post-deployment with a back end for the Image service.
- Key Management service (barbican)
barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1- Memcached
memcached: templates: memcached: replicas: 3- Networking service (neutron)
neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi customServiceConfig: | [DEFAULT] global_physnet_mtu = 9000 [ml2] mechanism_drivers = ovn [ovn] vhost_sock_dir = <path> [ml2_type_vlan] network_vlan_ranges = <network_name1>:<VLAN-ID1>:<VLAN-ID2>,<network_name2>:<VLAN-ID1>:<VLAN-ID2>-
- mechanism_drivers - If you are using SR-IOV, you must also add the sriovnicswitch mechanism driver, for example, mechanism_drivers = ovn,sriovnicswitch.
- vhost_sock_dir - Replace <path> with the absolute path to the vhost sockets, for example, /var/lib/vhost_sockets.
- network_vlan_ranges - Replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:*name field.)
- <VLAN-ID1> - Replace <VLAN-ID1> and <VLAN-ID2> with the VLAN IDs you are using. A filled-in example follows this list.
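For illustration only, a filled-in version of these options for a deployment with two hypothetical physical networks (datacentre and sriov-1) might look like this; the network names, VLAN ranges, and socket path are placeholders for your own values:
[DEFAULT]
global_physnet_mtu = 9000
[ml2]
mechanism_drivers = ovn,sriovnicswitch
[ovn]
vhost_sock_dir = /var/lib/vhost_sockets
[ml2_type_vlan]
network_vlan_ranges = datacentre:100:199,sriov-1:200:299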
-
- Object Storage service (swift)
swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 2 swiftRing: ringReplicas: 3 swiftStorage: networkAttachments: - storage replicas: 3 storageClass: local-storage storageRequest: 100Gi- OVN
ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {} ovnController: networkAttachment: tenant nicMappings: <network_name>: <NIC_name>-
- nicMappings - Replace <network_name> with the name of the physical network your gateway is on. (This network is set in the neutron network provider:*name field.)
- <NIC_name> - Replace <NIC_name> with the name of the NIC connecting to the gateway network.
- <network_name>: <NIC_name> - Optional: add additional <network_name>: <NIC_name> pairs under nicMappings as required.
-
- Placement service (placement)
placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret- RabbitMQ
rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer- Telemetry service (ceilometer, prometheus)
telemetry: enabled: true template: metricStorage: enabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: enabled: false aodh: passwordSelectors: databaseAccount: aodh databaseInstance: openstack memcachedInstance: memcached secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false-
- autoscaling - You must have the autoscaling field present, even if autoscaling is disabled.
-
Create the control plane:
$ oc create -f openstack_control_plane.yaml -n openstack
  Note: Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run RHOSO CLI commands.
  $ oc rsh -n openstack openstackclient
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Check the status of the control plane deployment:
  $ oc get openstackcontrolplane -n openstack
  Sample output:
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  The OpenStackControlPlane resources are created when the status is "Setup complete".
  Tip: Append the -w option to the end of the get command to track deployment progress.
- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:
  $ oc get pods -n openstack
  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:
  $ oc rsh -n openstack openstackclient
- Confirm that the internal service endpoints are registered with each service:
  $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
  Sample output:
  +--------------+-----------+----------------------------------------------------------------+
  | Service Name | Interface | URL                                                            |
  +--------------+-----------+----------------------------------------------------------------+
  | glance       | internal  | http://glance-internal.openstack.svc:9292                      |
  | glance       | public    | http://glance-public-openstack.apps.ostest.test.metalkube.org  |
  +--------------+-----------+----------------------------------------------------------------+
- Exit the OpenStackClient pod:
  $ exit
9.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
spec:
messagingBus:
cluster: rabbitmq
notificationsBus:
cluster: rabbitmq
secret: osp-secret
storageClass: your-RHOCP-storage-class
cinder:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
cinderAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
cinderScheduler:
replicas: 1
cinderBackup:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
cinderVolumes:
volume1:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
nova:
apiOverride:
route: {}
template:
apiServiceTemplate:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
metadataServiceTemplate:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
schedulerServiceTemplate:
replicas: 3
cellTemplates:
cell0:
cellDatabaseAccount: nova-cell0
cellDatabaseInstance: openstack
messagingBus:
cluster: rabbitmq
hasAPIAccess: true
cell1:
cellDatabaseAccount: nova-cell1
cellDatabaseInstance: openstack-cell1
messagingBus:
cluster: rabbitmq-cell1
noVNCProxyServiceTemplate:
enabled: true
networkAttachments:
- ctlplane
hasAPIAccess: true
secret: osp-secret
dns:
template:
options:
- key: server
values:
- 192.168.122.1
- key: server
values:
- 192.168.122.2
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: ctlplane
metallb.universe.tf/allow-shared-ip: ctlplane
metallb.universe.tf/loadBalancerIPs: 192.168.122.80
spec:
type: LoadBalancer
replicas: 2
galera:
templates:
openstack:
storageRequest: 5000M
secret: osp-secret
replicas: 3
openstack-cell1:
storageRequest: 5000M
secret: osp-secret
replicas: 3
keystone:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
replicas: 3
glance:
apiOverrides:
default:
route: {}
template:
databaseInstance: openstack
storage:
storageRequest: 10G
secret: osp-secret
keystoneEndpoint: default
glanceAPIs:
default:
replicas: 0 # Configure back end; set to 3 when deploying service
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
networkAttachments:
- storage
barbican:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
barbicanAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
barbicanWorker:
replicas: 3
barbicanKeystoneListener:
replicas: 1
memcached:
templates:
memcached:
replicas: 3
neutron:
apiOverride:
route: {}
template:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
networkAttachments:
- internalapi
swift:
enabled: true
proxyOverride:
route: {}
template:
swiftProxy:
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
replicas: 2
swiftRing:
ringReplicas: 3
swiftStorage:
networkAttachments:
- storage
replicas: 3
storageRequest: 100Gi
ovn:
template:
ovnDBCluster:
ovndbcluster-nb:
replicas: 3
dbType: NB
storageRequest: 10G
networkAttachment: internalapi
ovndbcluster-sb:
replicas: 3
dbType: SB
storageRequest: 10G
networkAttachment: internalapi
ovnNorthd: {}
ovnController:
networkAttachment: tenant
nicMappings:
my-network: nic1
placement:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
replicas: 3
secret: osp-secret
rabbitmq:
templates:
rabbitmq:
persistence:
storage: 10Gi
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.85
spec:
type: LoadBalancer
rabbitmq-cell1:
persistence:
storage: 10Gi
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.86
spec:
type: LoadBalancer
telemetry:
enabled: true
template:
metricStorage:
enabled: true
dashboardsEnabled: true
dataplaneNetwork: ctlplane
networkAttachments:
- ctlplane
monitoringStack:
alertingEnabled: true
scrapeInterval: 30s
storage:
strategy: persistent
retention: 24h
persistent:
pvcStorageRequest: 20G
autoscaling:
enabled: false
aodh:
databaseAccount: aodh
databaseInstance: openstack
passwordSelector:
aodhService: AodhPassword
serviceUser: aodh
secret: osp-secret
heatInstance: heat
ceilometer:
enabled: true
secret: osp-secret
logging:
enabled: false
-
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.
  Note: If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.
- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- nicMappings: Pairs the physical network your gateway is on with the NIC that connects to the gateway network. This physical network is set in the neutron network provider:*name field. You can, optionally, add more <network_name>: <nic_name> pairs as required.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.
  Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
9.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution. Removing a service is not the same as stopping service pods, and it is irreversible: disabling a service removes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service.
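One possible way to take such a backup is to dump the service database from the Galera pod before you disable the service. The following is only a sketch: it assumes the default Galera cluster name (openstack, giving a pod named openstack-galera-0), that the database root password is stored in the DbRootPassword key of osp-secret, and that the database for the service being removed is named cinder:
$ DB_ROOT_PASSWORD=$(oc get secret osp-secret -n openstack -o jsonpath='{.data.DbRootPassword}' | base64 -d)
$ oc exec -n openstack openstack-galera-0 -- \
    mysqldump -uroot -p"$DB_ROOT_PASSWORD" cinder > cinder-db-backup.sql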
Procedure
- Open the OpenStackControlPlane CR file on your workstation.
- Locate the service you want to remove from the control plane and disable it:
  cinder:
    enabled: false
    apiOverride:
      route: {}
    ...
- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack
- Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:
  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started
  The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".
  Tip: Append the -w option to the end of the get command to track deployment progress.
- Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:
  $ oc get pods -n openstack
- Check that the service is removed:
  $ oc get cinder -n openstack
  This command returns the following message when the service is successfully removed:
  No resources found in openstack namespace.
- Check that the API endpoints for the service are removed from the Identity service (keystone):
  $ oc rsh -n openstack openstackclient
  $ openstack endpoint list --service volumev3
  This command returns the following message when the API endpoints for the service are successfully removed:
  No service with a type, name or ID of 'volumev3' exists.
9.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)
Chapter 10. Creating the data plane for SR-IOV and DPDK environments
You can deploy Red Hat OpenStack Services on OpenShift (RHOSO) environments that take advantage of the performance and throughput advantages of SR-IOV and DPDK.
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 or 9.6 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. After you have defined your OpenStackDataPlaneNodeSet CRs, you create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.
An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information on how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
10.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane for NFV environments.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
10.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each
OpenStackDataPlaneNodeSetCR in your data plane.- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
-
Pre-provisioned nodes are configured with an SSH public key in the
$HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:
$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096-
Replace
<key_file_name> with the name to use for the key pair.
-
Replace
Create the
Secret CR for Ansible and apply it to the cluster:
$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
Replace
<key_file_name>with the name and location of your SSH key pair file. -
Optional: Only include the
--from-file=authorized_keysoption for bare-metal nodes that must be provisioned when creating the data plane.
-
Replace
If you are creating Compute nodes, create a secret for migration.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''Create the
Secret CR for migration and apply it to the cluster:
$ oc create secret generic nova-migration-ssh-key \
    --save-config \
    --from-file=ssh-privatekey=nova-migration-ssh-key \
    --from-file=ssh-publickey=nova-migration-ssh-key.pub \
    -n openstack \
    -o yaml | oc apply -f -
For nodes that have not been registered to the Red Hat Customer Portal, create the
Secret CR for subscription-manager credentials to register the nodes:
$ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
Replace
<subscription_manager_username> with the username you set for subscription-manager.
Replace
<subscription_manager_password> with the password you set for subscription-manager.
-
Replace
Create a
SecretCR that contains the Red Hat registry credentials:$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'Replace
<username> and <password> with your Red Hat registry username and password credentials.
For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
If you are creating Compute nodes, create a secret for libvirt.
Create a file on your workstation named
secret_libvirt.yaml to define the libvirt secret:
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>
Replace
<base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:
$ echo -n <password> | base64
Tip: If you do not want to base64-encode the username and password, you can use the
stringData field instead of the data field to set the username and password.
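For example, a minimal sketch of the same Secret using stringData, where the value is supplied as plain text and the cluster stores it base64-encoded:
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
stringData:
  LibvirtPassword: <password>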
Create the
Secret CR:
$ oc apply -f secret_libvirt.yaml -n openstack
Verify that the
Secret CRs are created:
$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
10.3. Creating a custom SR-IOV Compute service
You must create a custom SR-IOV Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service performs the following tasks on the SR-IOV Compute nodes:
- Applies CPU pinning parameters.
- Performs PCI passthrough.
To create the SR-IOV custom service, you must perform these actions:
- Create a ConfigMap for CPU pinning that maps a CPU pinning configuration to a specified set of SR-IOV Compute nodes.
- Create a ConfigMap for PCI passthrough that maps a PCI passthrough configuration to a specified set of SR-IOV Compute nodes.
- Create the actual SR-IOV custom service that will implement the ConfigMaps on your data plane.
Prerequisites
-
You have the
occommand line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges.
Procedure
Create a
ConfigMap CR that defines configurations for CPU pinning and PCI passthrough, and save it to a YAML file on your workstation, for example, pinning-passthrough.yaml.
Change the values as appropriate for your environment:
--- apiVersion: v1 kind: ConfigMap metadata: name: cpu-pinning-nova data: 25-cpu-pinning-nova.conf: | [DEFAULT] reserved_host_memory_mb = 4096 [compute] cpu_shared_set = 0-3,24-27 cpu_dedicated_set = 8-23,32-47 [neutron] physnets = <network_name1>, <network_name2> [neutron_physnet_<network_name1>] numa_nodes = <ID list> [neutron_physnet_<network_name2>] numa_nodes = <ID list> [neutron_tunnel] numa_nodes = <ID list> --- apiVersion: v1 kind: ConfigMap metadata: name: sriov-nova data: 26-sriov-nova.conf: | [libvirt] cpu_power_management=false [pci] passthrough_whitelist = {"address": "0000:05:00.2", "physical_network":"sriov-1", "trusted":"true"} passthrough_whitelist = {"address": "0000:05:00.3", "physical_network":"sriov-2", "trusted":"true"} ----
cpu_shared_set: enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy. -
cpu_dedicated_set: enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, 4-12,^8,15 reserves cores from 4-12 and 15, excluding 8.
<network_name_n_>: replace<network_name1>and<network_name2>with the names of the physical networks that your gateways are on. (This network is set in the neutron networkprovider:*namefield.) numa_nodes = <ID list>: replace<ID list>with a comma-separated list of IDs of the NUMA nodes associated with this physnet. For example:0,1. For example:[neutron] physnets = foo,bar [neutron_physnet_foo] numa_nodes = 0 [neutron_physnet_bar] numa_nodes = 2, 3This configuration ensures that instances using one or more L2-type networks with
provider:physical_network=foo must be scheduled on host cores from NUMA node 0, while instances using one or more networks with provider:physical_network=bar must be scheduled on host cores from both NUMA nodes 2 and 3. For the latter case, it will be necessary to split the guest across two or more host NUMA nodes using the hw:numa_nodes extra spec.
passthrough_whitelist: specify valid NIC addresses and names for "address" and "physical_network".
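For reference, the hw:numa_nodes extra spec mentioned above is set on a flavor. For example (the flavor name is a placeholder):
$ openstack flavor set <flavor_name> --property hw:numa_nodes=2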
-
Create the
ConfigMap object, using the ConfigMap CR file:
$ oc create -f sriov-pinning-passthru.yaml -n openstack
Create an
OpenStackDataPlaneServiceCR that defines the SR-IOV custom service, and save it to a YAML file on your workstation, for examplenova-custom-sriov.yaml:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriovAdd the
ConfigMapCRs to the custom service, and specify theSecretCR for the cell that the node set that runs this service connects to:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriov spec: label: dataplane-deployment-nova-custom-sriov dataSources: - configMapRef: name: cpu-pinning-nova - configMapRef: name: sriov-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundleSpecify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the
playbookContentsfield:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriov spec: label: dataplane-deployment-nova-custom-sriov edpmServiceType: nova dataSources: - configMapRef: name: cpu-pinning-nova - configMapRef: name: sriov-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key playbook: osp.edpm.nova tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundleplaybook: identifies the default playbook available for your service.In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html.
Create the
nova-custom-sriov service:
$ oc apply -f nova-custom-sriov.yaml -n openstack
- Verify that the custom service is created:
$ oc get openstackdataplaneservice nova-custom-sriov -o yaml -n openstack
10.4. Creating a custom OVS-DPDK Compute service
You must create a custom OVS-DPDK Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service applies various parameters on the OVS-DPDK Compute nodes, including CPU pinning parameters, a block migration parameter, and a NUMA-aware vswitch feature that allows instances to spawn in the NUMA node that is connected to the NIC used by the OVS bridge.
To create the OVS-DPDK custom service, you must perform these actions:
- Create a ConfigMap that maps the configurations to a specified set of OVS-DPDK Compute nodes.
- Create the actual OVS-DPDK custom service that will implement the ConfigMap on your data plane.
Prerequisites
-
You have the
occommand line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges.
Procedure
Create a
ConfigMap CR that defines a configuration for the parameters, and save it to a YAML file on your workstation, for example, dpdk-custom.yaml.
Change the values as appropriate for your environment:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-custom-nova
  namespace: openstack
data:
  25-dpdk-custom-nova.conf: |
    [DEFAULT]
    reserved_host_memory_mb = 4096
    [compute]
    cpu_shared_set = 0-3,24-27
    cpu_dedicated_set = 8-23,32-47
    [neutron]
    physnets = <network_name1>, <network_name2>
    [neutron_physnet_<network_name1>]
    numa_nodes = <ID list>
    [neutron_physnet_<network_name2>]
    numa_nodes = <ID list>
    [neutron_tunnel]
    numa_nodes = <ID list>
    [libvirt]
    live_migration_permit_post_copy=false
---
-
cpu_shared_set: enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy. -
cpu_dedicated_set: enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, 4-12,^8,15 reserves cores from 4-12 and 15, excluding 8.
<network_name_n>: replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on, for which you need to configure NUMA affinity. (This network is set in the neutron network provider:* name field.)
<ID list>: replace <ID list> with a comma-separated list of IDs of the NUMA nodes associated with this physnet, for example, 0,1. For example:
[neutron]
physnets = foo,bar
[neutron_physnet_foo]
numa_nodes = 0
[neutron_physnet_bar]
numa_nodes = 2, 3
This configuration ensures that instances using one or more L2-type networks with
provider:physical_network=foo must be scheduled on host cores from NUMA node 0, while instances using one or more networks with provider:physical_network=bar must be scheduled on host cores from both NUMA nodes 2 and 3. For the latter case, it will be necessary to split the guest across two or more host NUMA nodes using the hw:numa_nodes extra spec.
-
live_migration_permit_post_copy=false: necessary for successful block live migration of instances attached to a Geneve network with DPDK.
-
Create the
ConfigMap object, using the ConfigMap CR file:
- Example
$ oc create -f dpdk-custom.yaml -n openstack
Create an
OpenStackDataPlaneService CR that defines the OVS-DPDK custom service, and save it to a YAML file on your workstation, for example nova-custom-ovsdpdk.yaml:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ovsdpdk
  namespace: openstack
Add the
ConfigMap CR to the custom service, and specify the Secret CR for the cell that the node set that runs this service connects to:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ovsdpdk
  namespace: openstack
spec:
  label: dataplane-deployment-nova-custom-ovsdpdk
  edpmServiceType: nova
  dataSources:
  - configMapRef:
      name: dpdk-custom-nova
  - secretRef:
      name: nova-cell1-compute-config
  - secretRef:
      name: nova-migration-ssh-key
  tlsCerts:
    default:
      contents:
      - dnsnames
      - ips
      networks:
      - ctlplane
      issuer: osp-rootca-issuer-internal
  caCerts: combined-ca-bundle
Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the
playbookContents field:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ovsdpdk
  namespace: openstack
spec:
  label: dataplane-deployment-nova-custom-ovsdpdk
  edpmServiceType: nova
  dataSources:
  - configMapRef:
      name: dpdk-custom-nova
  - secretRef:
      name: nova-cell1-compute-config
  - secretRef:
      name: nova-migration-ssh-key
  playbook: osp.edpm.nova
  tlsCerts:
    default:
      contents:
      - dnsnames
      - ips
      networks:
      - ctlplane
      issuer: osp-rootca-issuer-internal
  caCerts: combined-ca-bundle
playbook: identifies the default playbook available for your service. In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html.
Create the
nova-custom-ovsdpdk service:
$ oc apply -f nova-custom-ovsdpdk.yaml -n openstack
Verify that the custom service is created:
$ oc get openstackdataplaneservice nova-custom-ovsdpdk -o yaml -n openstack
10.5. Creating a set of data plane nodes with pre-provisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
Procedure
Create a file on your workstation named
openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
-
name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
env - Optional: a list of environment variables to pass to the pod.
-
Specify that the nodes in this set are pre-provisioned:
preProvisioned: true
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:
nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
-
Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
-
Create a Persistent Volume Claim (PVC) in the
openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs (a minimal example PVC sketch follows this step). Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to write logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment. Enable persistent logging for the data plane nodes:
nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
      mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
-
Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
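The procedure assumes that the PVC already exists. A minimal sketch of a PVC that satisfies these requirements follows; the name, size, and storage class are assumptions that you must adapt to your cluster, and the backing storage must not use the NFS volume plugin:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc
  namespace: openstack
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
-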
Add the common configuration for the set of nodes in this group under the
nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties. Register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following steps demonstrate how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
Create a
Secret CR that contains the subscription-manager credentials:
apiVersion: v1
kind: Secret
metadata:
  name: subscription-manager
data:
  username: <base64_encoded_username>
  password: <base64_encoded_password>
Create a
Secret CR that contains the Red Hat registry credentials:
$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
-
Replace <username> and <password> with your Red Hat registry username and password credentials.
For information about how to create your registry service account, see the Red Hat Knowledgebase article Creating Registry Service Accounts.
Specify the
Secret CRs to use to source the usernames and passwords:
nodeTemplate:
  ansible:
    ...
    ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
        - {name: "*", state: disabled}
        - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
        - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
Define each node in this node set:
nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com edpm_network_config_nmstate: true edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com-
edpm-compute-0 - The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
networks - Defines the IPAM and the DNS records for the node.
fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
ansibleVars - Node-specific Ansible variables that customize the node.
Note
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
edpm_network_config_nmstate - Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
For more information, see:
-
OpenStackDataPlaneNodeSet CR properties
- Network interface configuration options
- Example custom network interfaces for NFV
-
In the
services section, add the list of services that will run on the data plane node. Ensure that you replace nova with nova-custom-sriov, or nova-custom-ovsdpdk, or both:
...
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-ovn
- neutron-ovn-igmp
- neutron-metadata
- libvirt
- nova-custom-sriov
- nova-custom-ovsdpdk
- telemetry
...
-
Save the
openstack_preprovisioned_node_set.yaml definition file. Create the data plane resources:
$ oc create -f openstack_preprovisioned_node_set.yaml -n openstack
Verify that the data plane resources have been created:
$ oc get openstackdataplanenodeset -n openstack
- Sample output
NAME                  STATUS  MESSAGE
openstack-data-plane  False   Deployment not started
For information about the meaning of the returned status, see Data plane conditions and states.
Verify that the
Secret resource was created for the node set:
$ oc get secret | grep openstack-data-plane
Sample output
dataplanenodeset-openstack-data-plane Opaque 1 3m50s
Verify that the services were created:
$ oc get openstackdataplaneservice -n openstack
Sample output
NAME               AGE
configure-network  6d7h
configure-os       6d6h
install-os         6d6h
run-os             6d6h
validate-network   6d6h
ovn                6d6h
libvirt            6d6h
nova               6d6h
telemetry          6d6h
10.5.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes with a single NIC
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
The example that follows for an OpenStackDataPlaneNodeSet CR assumes that the node contains a single NIC.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
-
ctlplane_dns_nameservers -
dns_search_domains -
ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-data-plane
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: true
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
timesync_ntp_servers:
- hostname: ntp.example.com
iburst: true
- hostname: ntp2.example.com
iburst: false
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:04:60:55:22
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-compute-0:
hostName: edpm-compute-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.100
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.100
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.100
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-0.example.com
edpm-compute-1:
hostName: edpm-compute-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.101
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.101
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.101
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-1.example.com
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-ovn
- neutron-ovn-igmp
- neutron-metadata
- libvirt
- nova-custom-sriov
- nova-custom-ovsdpdk
- telemetry
10.6. Creating a set of data plane nodes with unprovisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
Prerequisites
- Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
-
A
BareMetalHost CR is registered and inspected for each bare-metal data plane node. Each bare-metal node must be in the Available state after inspection.
Procedure
Create a file on your workstation named
openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  tlsEnabled: true
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
-
name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
env - Optional: a list of environment variables to pass to the pod.
-
Define the
baremetalSetTemplate field to describe the configuration of the bare-metal nodes:
preProvisioned: false
baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>
-
Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node.
Replace <ansible_ssh_user> with the username of the Ansible SSH user.
Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
Replace <interface> with the control plane interface that the node connects to, for example, enp6s0.
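For context, a BareMetalHost CR that the bmhLabelSelector in this example would match might look like the following minimal sketch. The field names follow the Metal3 BareMetalHost API; the MAC address, BMC address, and credentials Secret name are assumptions:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
  labels:
    app: openstack
spec:
  online: true
  bootMACAddress: 52:54:04:60:55:22
  bmc:
    address: redfish+http://192.168.122.1:8000/redfish/v1/Systems/1
    credentialsName: edpm-compute-0-bmc-secret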
The BMO manages
BareMetalHost CRs in the openshift-machine-api namespace by default. You must update the Provisioning CR to watch all namespaces:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:
nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
-
Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
-
Create a Persistent Volume Claim (PVC) in the
openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to write logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment. Enable persistent logging for the data plane nodes:
nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
      mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
-
Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Add the common configuration for the set of nodes in this group under the
nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For more information, see:
Define each node in this node set:
nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com edpm_network_config_nmstate: true edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com-
edpm-compute-0 - The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
networks - Defines the IPAM and the DNS records for the node.
fixedIP - Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
ansibleVars - Node-specific Ansible variables that customize the node.
Note
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
edpm_network_config_nmstate - Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
-
In the
services section, add the list of services that will run on the data plane node. Ensure that you replace nova with nova-custom-sriov, or nova-custom-ovsdpdk, or both:
...
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-ovn
- neutron-ovn-igmp
- neutron-metadata
- libvirt
- nova-custom-sriov
- nova-custom-ovsdpdk
- telemetry
...
-
Save the
openstack_unprovisioned_node_set.yaml definition file. Create the data plane resources:
$ oc create -f openstack_unprovisioned_node_set.yaml -n openstack
Verify that the data plane resources have been created:
$ oc get openstackdataplanenodeset -n openstack
NAME                  STATUS  MESSAGE
openstack-data-plane  False   Deployment not started
For information on the meaning of the returned status, see Data plane conditions and states.
Verify that the
Secret resource was created for the node set:
$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane Opaque 1 3m50s
Verify that the services were created:
$ oc get openstackdataplaneservice -n openstack
NAME               AGE
configure-network  6d7h
configure-os       6d6h
install-os         6d6h
run-os             6d6h
validate-network   6d6h
ovn                6d6h
libvirt            6d6h
nova               6d6h
telemetry          6d6h
10.6.1. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
-
ctlplane_dns_nameservers -
dns_search_domains -
ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-data-plane
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: false
baremetalSetTemplate:
deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
bmhNamespace: openstack
cloudUserName: cloud-admin
bmhLabelSelector:
app: openstack
ctlplaneInterface: enp1s0
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
timesync_ntp_servers:
- hostname: ntp.example.com
iburst: true
- hostname: ntp2.example.com
iburst: false
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:04:60:55:22
edpm-compute-1:
nic1: 52:54:04:60:55:22
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-compute-0:
hostName: edpm-compute-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-0.example.com
bmhLabelSelector:
nodeName: edpm-compute-0
edpm-compute-1:
hostName: edpm-compute-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-1.example.com
bmhLabelSelector:
nodeName: edpm-compute-1
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-ovn
- neutron-ovn-igmp
- neutron-metadata
- libvirt
- nova-custom-sriov
- nova-custom-ovsdpdk
- telemetry
10.7. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.
10.7.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.
| Field | Description |
|---|---|
|
| Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey For more information, see Creating an SSH authentication secret.
Default: |
|
|
Name of the network to use for management (SSH/Ansible). Default: |
|
|
Network definitions for the |
|
|
Ansible configuration options. For more information, see |
|
| The files to mount into an Ansible Execution Pod. |
|
|
UserData configuration for the |
|
|
NetworkData configuration for the |
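Taken together, a minimal nodeTemplate that uses several of these properties might look like the following sketch, based on the examples earlier in this chapter; the Secret name and SSH user are assumptions:
nodeTemplate:
  ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
  managementNetwork: ctlplane
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVars:
      edpm_network_config_nmstate: true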
10.7.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.
| Field | Description |
|---|---|
|
|
Ansible configuration options. For more information, see |
|
| The files to mount into an Ansible Execution Pod. |
|
| The node name. |
|
| Name of the network to use for management (SSH/Ansible). |
|
| NetworkData configuration for the node. |
|
| Instance networks. |
|
| Node-specific user data. |
10.7.3. ansible
Defines the group of Ansible configuration options.
| Field | Description |
|---|---|
|
|
The user associated with the secret you created in Creating the data plane secrets. Default: |
|
| SSH host for the Ansible connection. |
|
| SSH port for the Ansible connection. |
|
|
The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each Note
The |
|
|
A list of sources to populate Ansible variables from. Values defined by an |
10.7.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
| Field | Description |
|---|---|
|
|
An optional identifier to prepend to each key in the |
|
|
The |
|
|
The |
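A minimal sketch of ansibleVarsFrom that combines these fields, based on the subscription-manager example earlier in this chapter; with this prefix, the Secret keys username and password become the Ansible variables subscription_manager_username and subscription_manager_password:
ansible:
  ansibleVarsFrom:
    - prefix: subscription_manager_
      secretRef:
        name: subscription-manager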
10.8. Network interface configuration options
Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments.
Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic.
10.8.1. interface
Defines a single network interface. The network interface name uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2, instead of named interfaces such as eth0 and eno2. For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
ethX interfaces, such as eth0, eth1, and so on. Names appear in this format when consistent device naming is turned off in udev.
enoX and emX interfaces, such as eno0, eno1, em0, em1, and so on. These are usually on-board interfaces.
enX and any other interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, and so on. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, for example, if the interfaces have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
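If you need the numbered names to be deterministic regardless of which interfaces are live, you can pin them to MAC addresses per node with the edpm_network_config_os_net_config_mappings variable shown in the node set examples. A minimal sketch follows; the MAC addresses are assumptions:
edpm_network_config_os_net_config_mappings:
  edpm-compute-0:
    nic1: 52:54:04:60:55:22
    nic2: 52:54:04:60:55:23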
| Option | Default | Description |
|---|---|---|
|
|
Name of the interface. The network interface | |
|
| False | Use DHCP to get an IP address. |
|
| False | Use DHCP to get a v6 IP address. |
|
| A list of IP addresses assigned to the interface. | |
|
| A list of routes assigned to the interface. For more information, see Section 10.8.7, “routes”. | |
|
| 1500 | The maximum transmission unit (MTU) of the connection. |
|
| False |
Defines the interface as the primary interface. Required only when the |
|
| False | Write the device alias configuration instead of the system names. |
|
| None | Arguments that you want to pass to the DHCP client. |
|
| None | List of DNS servers that you want to use for the interface. |
|
|
Set this option to |
- Example
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic2 ...
10.8.2. vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
vlan options
| Option | Default | Description |
|---|---|---|
| vlan_id | The VLAN ID. | |
| device | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. | |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | A list of IP addresses assigned to the VLAN. | |
| routes | A list of routes assigned to the VLAN. For more information, see Section 10.8.7, “routes”. | |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the VLAN as the primary interface. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the VLAN. |
- Example
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: ... - type: vlan device: nic{{ loop.index + 1 }} mtu: {{ lookup(vars, networks_lower[network] ~ _mtu) }} vlan_id: {{ lookup(vars, networks_lower[network] ~ _vlan_id) }} addresses: - ip_netmask: {{ lookup(vars, networks_lower[network] ~ _ip) }}/{{ lookup(vars, networks_lower[network] ~ _cidr) }} routes: {{ lookup(vars, networks_lower[network] ~ _host_routes) }} ...- Example - creating a VLAN on an
ovs_bridge
To create a VLAN on an ovs_bridge, you must place the VLAN configuration under the members section:
...
network_config:
- type: ovs_bridge
  name: br0
  use_dhcp: false
  members:
  - type: interface
    name: nic5
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...
- Example - creating a VLAN on an ovs_user_bridge
To create a VLAN on an ovs_user_bridge, you must place the VLAN configuration under the members section. The members must be either an ovs_dpdk_bond or an ovs_dpdk_port:
...
network_config:
- type: ovs_user_bridge
  name: br-link
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    mtu: 9000
    rx_queue: 4
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic2
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic3
  - type: vlan
    vlan_id: 138
    use_dhcp: false
...
10.8.3. ovs_bridge
Defines a bridge in Open vSwitch (OVS), which connects multiple interface, ovs_bond, and vlan objects together.
The network interface type, ovs_bridge, takes a parameter name.
Placing Control group networks on the ovs_bridge interface can cause down time. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:
- You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
- To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
If you have multiple bridges, you must use distinct bridge names other than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.
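A minimal sketch of the layout described above, with the Internal API network as a VLAN on the provisioning interface and the OVS bridge on a second interface; the interface names and the internalapi variable names are assumptions that must match the networks in your NetConfig CR:
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
- type: vlan
  device: nic1
  vlan_id: {{ internalapi_vlan_id }}
  addresses:
  - ip_netmask: {{ internalapi_ip }}/{{ internalapi_cidr }}
- type: ovs_bridge
  name: br-ex
  use_dhcp: false
  members:
  - type: interface
    name: nic2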
ovs_bridge options
| Option | Default | Description |
|---|---|---|
| name | Name of the bridge. | |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | A list of IP addresses assigned to the bridge. | |
| routes | A list of routes assigned to the bridge. For more information, see Section 10.8.7, “routes”. | |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. | |
| ovs_options | A set of options to pass to OVS when creating the bridge. | |
| ovs_extra | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge. | |
| defroute | True |
Use a default route provided by the DHCP service. Only applies when you enable |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bridge. |
- Example
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-bond dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bound_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }} ...
10.8.4. Network interface bonding
You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.
Red Hat OpenStack Services on OpenShift (RHOSO) supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.
| Bond type | Type value | Allowed bridge types | Allowed members |
|---|---|---|---|
| OVS kernel bonds |
|
|
|
| OVS-DPDK bonds |
|
|
|
| Linux kernel bonds |
|
|
|
Do not combine ovs_bridge and ovs_user_bridge on the same node.
ovs_bond
Defines a bond in Open vSwitch (OVS) to join two or more interfaces together. This helps with redundancy and increases bandwidth.
Table 10.7. ovs_bond options
Option Default Description
name
Name of the bond.
use_dhcp
False
Use DHCP to get an IP address.
use_dhcpv6
False
Use DHCP to get a v6 IP address.
addresses
A list of IP addresses assigned to the bond.
routes
A list of routes assigned to the bond. For more information, see Section 10.8.7, “routes”.
mtu
1500
The maximum transmission unit (MTU) of the connection.
primary
False
Defines the interface as the primary interface.
members
A sequence of interface objects that you want to use in the bond.
ovs_options
A set of options to pass to OVS when creating the bond. For more information, see Table 10.8, “
ovs_optionsparameters for OVS bonds”.ovs_extra
A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond.
defroute
True
Use a default route provided by the DHCP service. Only applies when you enable
use_dhcporuse_dhcpv6.persist_mapping
False
Write the device alias configuration instead of the system names.
dhclient_args
None
Arguments that you want to pass to the DHCP client.
dns_servers
None
List of DNS servers that you want to use for the bond.
Table 10.8. ovs_options parameters for OVS bonds
bond_mode=balance-slb
Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.
bond_mode=active-backup
When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.
lacp=[active | passive | off]
Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.
other-config:lacp-fallback-ab=true
Set active-backup as the bond mode if LACP fails.
other_config:lacp-time=[fast | slow]
Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
other_config:bond-detect-mode=[miimon | carrier]
Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
other_config:bond-miimon-interval=100
If using miimon, set the heartbeat interval in milliseconds.
bond_updelay=1000
Set the interval in milliseconds that a link must be up to be activated, to prevent flapping.
other_config:bond-rebalance-interval=10000
Set the interval in milliseconds at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.
- Example - OVS bond
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: ... members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}- Example - OVS DPDK bond
In this example, a bond is created as part of an OVS user space bridge:
edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: ... members: - type: ovs_user_bridge name: br-dpdk0 members: - type: ovs_dpdk_bond name: dpdkbond0 rx_queue: {{ num_dpdk_interface_rx_queues }} members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic5
10.8.5. LACP with OVS bonding modes
You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.
Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.
Do not use OVS bonds on control and storage networks. Instead, use Linux bonds with VLAN and LACP.
If you use OVS bonds, and restart the OVS or the neutron agent for updates, hot fixes, and other events, the control plane can be disrupted.
| Objective | OVS bond mode | Compatible LACP options | Notes |
| High availability (active-passive) |
|
| |
| Increased throughput (active-active) |
|
|
|
|
|
|
|
10.8.6. linux_bond
Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.
| Option | Default | Description |
|---|---|---|
| name | Name of the bond. | |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | A list of IP addresses assigned to the bond. | |
| routes | A list of routes assigned to the bond. See Section 10.8.7, “routes”. | |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members | A sequence of interface objects that you want to use in the bond. | |
| bonding_options |
A set of options when creating the bond. See | |
| defroute | True |
Use a default route provided by the DHCP service. Only applies when you enable |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bond. |
bonding_options parameters for Linux bonds
-
The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:
bonding_options | Description |
|---|---|
|
|
Sets the bonding mode, which in the example is |
|
| Defines whether LACP packets are sent every 1 second, or every 30 seconds. |
|
| Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages. |
|
| The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver. |
- Example - Linux bond
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 mtu: {{ min_viable_mtu }} bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4" members: type: interface name: ens1f0 mtu: {{ min_viable_mtu }} primary: true type: interface name: ens1f1 mtu: {{ min_viable_mtu }} ...- Example - Linux bond: bonding two interfaces
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 members: - type: interface name: nic2 - type: interface name: nic3 bonding_options: "mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100" ...- Example - Linux bond set to
active-backupmode with one VLAN .... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond_api bonding_options: "mode=active-backup" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: {{ lookup(vars, networks_lower[network] ~ _vlan_id) }} device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet- Example - Linux bond on OVS bridge
In this example, the bond is set to
802.3adwith LACP mode and one VLAN:... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: "mode=802.3ad updelay=1000 miimon=100" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} ...
10.8.7. routes
Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.
| Option | Default | Description |
|---|---|---|
| ip_netmask | None | IP and netmask of the destination network. |
| default | False |
Sets this route to a default route. Equivalent to setting |
| next_hop | None | The IP address of the router used to reach the destination network. |
- Example - routes
... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant ... routes: {{ [ctlplane_host_routes] | flatten | unique }} ...
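A minimal sketch of explicit route entries that use these options; the destination networks and next-hop addresses are assumptions:
routes:
- ip_netmask: 10.10.10.0/24
  next_hop: 192.168.122.1
- default: true
  next_hop: 192.168.122.254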
10.9. edpm-ansible variables for SR-IOV and OVS-DPDK
This section describes how OVS-DPDK and SR-IOV use edpm-ansible variables to configure the CPU and memory for optimum performance on Red Hat OpenStack Services on OpenShift (RHOSO) data plane nodes. Use this information to evaluate the hardware support on your Compute nodes and to decide how to partition the hardware to optimize your OVS-DPDK and SR-IOV deployments.
Always pair CPU sibling threads, or logical CPUs, together in the physical core when allocating CPU cores.
For details on how to determine the CPU and NUMA nodes on your Compute nodes, see Discovering your NUMA node topology. Use this information to map CPU and other parameters to support the host, guest instance, and OVS-DPDK process needs.
10.9.1. Data plane (EDPM) Ansible variables
The following variables are part of data plane (EDPM) Ansible roles that you use in a custom resource (CR) to configure the CPU and memory for optimum performance on Red Hat OpenStack Services on OpenShift (RHOSO) data plane nodes for OVS-DPDK and SR-IOV:
- edpm_ovs_dpdk
-
Enables you to add, modify, and delete OVS-DPDK configurations, by using values defined in the OVS-DPDK
edpm Ansible variables.
- Provides the CPU cores that are used for the DPDK poll mode drivers (PMD). Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces.
- edpm_ovs_dpdk_enable_tso
-
Enables (
true) or disables (false) the TCP segmentation offloading (TSO) for DPDK feature. The default is false.
-
Name of the custom TuneD profile. The default value is
throughput-performance. - edpm_tuned_isolated_cores
- A set of CPU cores isolated from the host processes.
- edpm_ovs_dpdk_socket_memory
Specifies the amount of memory in MB to pre-allocate from the hugepage pool, per NUMA node.
dpm_ovs_dpdk_socket_memoryis theother_config:dpdk-socket-memvalue in OVS:- Provide as a comma-separated list.
- For a NUMA node without a DPDK NIC, use the static value of 1024MB (1GB).
Calculate the edpm_ovs_dpdk_socket_memory value from the MTU value of each NIC on the NUMA node. The following equation approximates the value:
MEMORY_REQD_PER_MTU = (ROUNDUP_PER_MTU + 800) * (4096 * 64) Bytes-
800 is the overhead value.
4096 * 64 is the number of packets in the mempool.
-
-
Add the
MEMORY_REQD_PER_MTU for each of the MTU values set on the NUMA node and add another 512 MB as buffer. Round the value up to a multiple of 1024.
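For example, as an illustrative calculation rather than a value taken from a specific deployment: for a single DPDK NIC with an MTU of 2000 on a NUMA node, the MTU rounds up to 2048, so MEMORY_REQD_PER_MTU = (2048 + 800) * 4096 * 64 = 746586112 bytes, or approximately 712 MB. Adding the 512 MB buffer gives 1224 MB, which rounds up to 2048 MB. If NUMA node 0 has no DPDK NIC and NUMA node 1 hosts this NIC, the resulting value would be "1024,2048".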
- edpm_ovs_dpdk_memory_channels
Maps memory channels in the CPU per NUMA node. edpm_ovs_dpdk_memory_channels is the other_config:dpdk-extra="-n <value>" value in OVS:
-
Use
dmidecode -t memory or your hardware manual to determine the number of memory channels available.
Use
ls /sys/devices/system/node/node* -d to determine the number of NUMA nodes.
-
Use
- edpm_ovs_dpdk_vhost_postcopy_support
-
Enable or disable OVS-DPDK vhost post-copy support. Setting this to
trueenables post-copy support for all vhost user client ports. - edpm_nova_libvirt_qemu_group
-
Set
edpm_nova_libvirt_qemu_grouptohugetlbfs`so that the `ovs-vswitchdandqemuprocesses can access the shared huge pages and UNIX socket that configures thevirtio-net device. This value is role-specific and should be applied to any role leveraging OVS-DPDK. - edpm_ovn_bridge_mappings
List of bridge and DPDK ports mappings.
Example:
edpm_ovn_bridge_mappings: - "datacentre:br-ex"- edpm_kernel_args
-
Provides multiple kernel arguments to
/etc/default/grubfor the compute nodes at boot time.
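For example, the following shell sketch applies the equation above to a NUMA node with one DPDK NIC that uses an MTU of 9000. It assumes that ROUNDUP_PER_MTU means the MTU rounded up to the nearest multiple of 1024 bytes, and that the second NUMA node has no DPDK NIC and therefore keeps the static 1024 MB value:
MTU=9000
ROUNDUP_PER_MTU=$(( (MTU + 1023) / 1024 * 1024 ))                 # 9216 (assumes rounding up to 1024)
MEM_REQD_MB=$(( (ROUNDUP_PER_MTU + 800) * 4096 * 64 / 1000000 ))  # ~2625 MB for this MTU
TOTAL_MB=$(( MEM_REQD_MB + 512 ))                                 # add the 512 MB buffer
SOCKET_MEM_MB=$(( (TOTAL_MB + 1023) / 1024 * 1024 ))              # round up to a multiple of 1024 -> 4096
echo "edpm_ovs_dpdk_socket_memory: ${SOCKET_MEM_MB},1024"
With more than one MTU value on the same NUMA node, you would add one MEMORY_REQD_PER_MTU term per MTU before adding the buffer.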
10.9.2. Configuration map parameters
You can set the following parameters in the ConfigMap section of a custom resource (CR) to configure the CPU and memory for optimum performance on Red Hat OpenStack Services on OpenShift (RHOSO) data plane nodes for OVS-DPDK and SR-IOV:
- cpu_shared_set - List or range of host CPU cores used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share).
- cpu_dedicated_set - A comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. To build the list (a short illustration follows this list):
  - Exclude all cores from the edpm_ovs_dpdk_pmd_core_list.
  - Include all remaining cores.
  - Pair the sibling threads together.
- reserved_host_memory_mb - Reserves memory in MB for tasks on the host. Use the static value of 4096 MB.
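As a brief illustration of these steps, the example node set in the next section isolates cores 2-19,22-39, assigns 2-7,22-27 to the DPDK PMD threads, and leaves 0,20,1,21 for the host. Excluding the PMD cores and pairing the remaining sibling threads produces the following ConfigMap values, which match the non-partitioned NIC example that follows:
[DEFAULT]
reserved_host_memory_mb = 4096
[compute]
cpu_shared_set = "0,20,1,21"
cpu_dedicated_set = "8-19,28-39"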
10.10. Example custom network interfaces for NFV
The following examples illustrate how you can use a template to customize network interfaces for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments.
10.10.1. Example template - non-partitioned NIC
This template example configures the RHOSO networks on a NIC that is not partitioned.
apiVersion: v1
data:
25-igmp.conf: |
[ovs]
igmp_snooping_enable = True
kind: ConfigMap
metadata:
name: neutron-igmp
namespace: openstack
---
apiVersion: v1
data:
25-cpu-pinning-nova.conf: |
[DEFAULT]
reserved_host_memory_mb = 4096
[compute]
cpu_shared_set = "0,20,1,21"
cpu_dedicated_set = "8-19,28-39"
[neutron]
physnets = dpdkdata1
[neutron_physnet_dpdkdata1]
numa_nodes = 1
[libvirt]
cpu_power_management=false
kind: ConfigMap
metadata:
name: ovs-dpdk-sriov-cpu-pinning-nova
namespace: openstack
---
apiVersion: v1
data:
03-sriov-nova.conf: |
[pci]
device_spec = {"address": "0000:05:00.2", "physical_network":"sriov-1", "trusted":"true"}
device_spec = {"address": "0000:05:00.3", "physical_network":"sriov-2", "trusted":"true"}
device_spec = {"address": "0000:07:00.0", "physical_network":"sriov-3", "trusted":"true"}
device_spec = {"address": "0000:07:00.1", "physical_network":"sriov-4", "trusted":"true"}
kind: ConfigMap
metadata:
name: sriov-nova
namespace: openstack
---
apiVersion: v1
data:
NodeRootPassword: cmVkaGF0Cg==
kind: Secret
metadata:
name: baremetalset-password-secret
namespace: openstack
type: Opaque
---
apiVersion: v1
data:
authorized_keys: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFBVFdweE5LNlNYTEo0dnh2Y0F4N0t4c3FLenI0a3pEalRpT0dQa3pyZWZnTjdVcmo2RUZPUXlBRWk5cXNnYkRVYXp0MktpdzJqc3djbG5TYW1zUDE0V2x3RkN2a1NuU1o4cTZwWGJTbGpNa3Z1R3FiVXZoSTVxTVlMTDNlRWpyU21nNDlWcTBWZkdFQmxIWUx6TGFncVBlN1FKR0NCMGlWTVk5b3N0TFdPM1NKbXVuZz09IGNpZm13X3JlcHJvZHVjZXJfa2V5Cg==
ssh-privatekey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFyQUFBQUJObFkyUnpZUwoxemFHRXlMVzVwYzNSd05USXhBQUFBQ0c1cGMzUndOVEl4QUFBQWhRUUFFMXFjVFN1a2x5eWVMOGIzQU1leXNiS2lzNitKCk13NDA0amhqNU02M240RGUxSzQraEJUa01nQkl2YXJJR3cxR3M3ZGlvc05vN01ISlowbXByRDllRnBjQlFyNUVwMG1mS3UKcVYyMHBZekpMN2hxbTFMNFNPYWpHQ3k5M2hJNjBwb09QVmF0Rlh4aEFaUjJDOHkyb0tqM3UwQ1JnZ2RJbFRHUGFMTFMxagp0MGlacnA0QUFBRVl0cGNtdHJhWEpyWUFBQUFUWldOa2MyRXRjMmhoTWkxdWFYTjBjRFV5TVFBQUFBaHVhWE4wY0RVeU1RCkFBQUlVRUFCTmFuRTBycEpjc25pL0c5d0RIc3JHeW9yT3ZpVE1PTk9JNFkrVE90NStBM3RTdVBvUVU1RElBU0wycXlCc04KUnJPM1lxTERhT3pCeVdkSnFhdy9YaGFYQVVLK1JLZEpueXJxbGR0S1dNeVMrNGFwdFMrRWptb3hnc3ZkNFNPdEthRGoxVwpyUlY4WVFHVWRndk10cUNvOTd0QWtZSUhTSlV4ajJpeTB0WTdkSW1hNmVBQUFBUWdHTWZobWFSblZFcnhjZ2Z6aVRpdzFnClBjYXBBV21TMHh5dDNyclhoSnExd0pRMys3ZFp0Y3l0alg5VVVuNnh0NlE1M0JTT1ZvaWR2L2pZK2krYytNVVhUZ0FBQUIKUmphV1p0ZDE5eVpYQnliMlIxWTJWeVgydGxlUUVDQXdRRkJnPT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCg==
ssh-publickey: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFBVFdweE5LNlNYTEo0dnh2Y0F4N0t4c3FLenI0a3pEalRpT0dQa3pyZWZnTjdVcmo2RUZPUXlBRWk5cXNnYkRVYXp0MktpdzJqc3djbG5TYW1zUDE0V2x3RkN2a1NuU1o4cTZwWGJTbGpNa3Z1R3FiVXZoSTVxTVlMTDNlRWpyU21nNDlWcTBWZkdFQmxIWUx6TGFncVBlN1FKR0NCMGlWTVk5b3N0TFdPM1NKbXVuZz09IGNpZm13X3JlcHJvZHVjZXJfa2V5Cg==
kind: Secret
metadata:
name: dataplane-ansible-ssh-private-key-secret
namespace: openstack
type: Opaque
---
apiVersion: v1
data:
LibvirtPassword: MTIzNDU2Nzg=
kind: Secret
metadata:
name: libvirt-secret
namespace: openstack
type: Opaque
---
apiVersion: v1
data:
ssh-privatekey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFyQUFBQUJObFkyUnpZUwoxemFHRXlMVzVwYzNSd05USXhBQUFBQ0c1cGMzUndOVEl4QUFBQWhRUUFwWTlSRzV5a2pLR3p2c295dWlDZm1zakEwZkFYCmkvS0hQT3R3Zm9NZjRQZXpRSFFNOHFJZ0pGc0svaVlwNVJIWmNVQlcwVVBCNnBpazQ1L3k0QVF4bmVBQWRrN0JQbTc0dG8KSkxoVjY2U3pzV2pHR1NFdzVXVFBwVUVpaXdQMlNiL1l4dXloNWlLbUJyTE5SRWpYTEJvbjJJZWRBbEJMaC9FaGpkdFZjUwo5ZzczQ0tvQUFBRVFoeS9PODRjdnp2TUFBQUFUWldOa2MyRXRjMmhoTWkxdWFYTjBjRFV5TVFBQUFBaHVhWE4wY0RVeU1RCkFBQUlVRUFLV1BVUnVjcEl5aHM3N0tNcm9nbjVySXdOSHdGNHZ5aHp6cmNINkRIK0QzczBCMERQS2lJQ1JiQ3Y0bUtlVVIKMlhGQVZ0RkR3ZXFZcE9PZjh1QUVNWjNnQUhaT3dUNXUrTGFDUzRWZXVrczdGb3hoa2hNT1ZrejZWQklvc0Q5a20vMk1icwpvZVlpcGdheXpVUkkxeXdhSjlpSG5RSlFTNGZ4SVkzYlZYRXZZTzl3aXFBQUFBUWdEQ0lEdHFqZ0JNam8rbG1rRnhzV3NvCkxKOGxBSWF0a0ZTdDkxcGJHWWIrVFRnS0NSOGhqbXdjalNoRzFlNlRaZWZNTkc5TklzVlRYYjNjTkYvaThJTHV1UUFBQUEKNXViM1poSUcxcFozSmhkR2x2YmdFQ0F3UT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCg==
ssh-publickey: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFDbGoxRWJuS1NNb2JPK3lqSzZJSitheU1EUjhCZUw4b2M4NjNCK2d4L2c5N05BZEF6eW9pQWtXd3IrSmlubEVkbHhRRmJSUThIcW1LVGpuL0xnQkRHZDRBQjJUc0UrYnZpMmdrdUZYcnBMT3hhTVlaSVREbFpNK2xRU0tMQS9aSnY5akc3S0htSXFZR3NzMUVTTmNzR2lmWWg1MENVRXVIOFNHTjIxVnhMMkR2Y0lxZz09IG5vdmEgbWlncmF0aW9uCg==
kind: Secret
metadata:
name: nova-migration-ssh-key
namespace: openstack
type: kubernetes.io/ssh-auth
---
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm
namespace: openstack
spec:
baremetalSetTemplate:
bmhLabelSelector:
app: openstack
cloudUserName: cloud-admin
ctlplaneInterface: enp130s0f0
passwordSecret:
name: baremetalset-password-secret
namespace: openstack
provisioningInterface: enp5s0
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
nodeTemplate:
ansible:
ansiblePort: 22
ansibleUser: cloud-admin
ansibleVars:
dns_search_domains: []
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_fips_mode: check
edpm_kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt
intel_iommu=on tsx=off isolcpus=2-19,22-39
edpm_network_config_hide_sensitive_logs: false
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
dmiString: system-product-name
id: PowerEdge R730
nic1: eno1
nic2: eno2
nic3: enp130s0f0
nic4: enp130s0f1
nic5: enp130s0f2
nic6: enp130s0f3
nic7: enp5s0f0
nic8: enp5s0f1
nic9: enp5s0f2
nic10: enp5s0f3
nic11: enp7s0f0np0
nic12: enp7s0f1np1
edpm-compute-1:
dmiString: system-product-name
id: PowerEdge R730
nic1: eno1
nic2: eno2
nic3: enp130s0f0
nic4: enp130s0f1
nic5: enp130s0f2
nic6: enp130s0f3
nic7: enp5s0f0
nic8: enp5s0f1
nic9: enp5s0f2
nic10: enp5s0f3
nic11: enp7s0f0np0
nic12: enp7s0f1np1
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: interface
name: nic2
use_dhcp: false
- type: linux_bond
name: bond_api
use_dhcp: false
bonding_options: "mode=active-backup"
dns_servers: {{ ctlplane_dns_nameservers }}
members:
- type: interface
name: nic3
primary: true
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes:
- default: true
next_hop: {{ ctlplane_gateway_ip }}
- type: vlan
vlan_id: {{ lookup(vars, networks_lower[internalapi] ~ _vlan_id) }}
device: bond_api
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[internalapi] ~ _ip) }}/{{ lookup(vars, networks_lower[internalapi] ~ _cidr) }}
- type: vlan
vlan_id: {{ lookup(vars, networks_lower[storage] ~ _vlan_id) }}
device: bond_api
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[storage] ~ _ip) }}/{{ lookup(vars, networks_lower[storage] ~ _cidr) }}
- type: ovs_user_bridge
name: br-link0
use_dhcp: false
ovs_extra: "set port br-link0 tag={{ lookup(vars, networks_lower[tenant] ~ _vlan_id) }}"
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[tenant] ~ _ip) }}/{{ lookup(vars, networks_lower[tenant] ~ _cidr) }}
mtu: {{ lookup(vars, networks_lower[tenant] ~ _mtu) }}
members:
- type: ovs_dpdk_bond
name: dpdkbond0
mtu: 9000
rx_queue: 2
ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
members:
- type: ovs_dpdk_port
name: dpdk0
members:
- type: interface
name: nic7
- type: ovs_dpdk_port
name: dpdk1
members:
- type: interface
name: nic8
- type: ovs_user_bridge
name: br-dpdk0
mtu: 9000
use_dhcp: false
members:
- type: ovs_dpdk_bond
name: dpdkbond1
mtu: 9000
rx_queue: 3
ovs_options: "bond_mode=balance-tcp lacp=active other_config:lacp-time=fast other-config:lacp-fallback-ab=true other_config:lb-output-action=true"
members:
- type: ovs_dpdk_port
name: dpdk2
members:
- type: interface
name: nic5
- type: ovs_dpdk_port
name: dpdk3
members:
- type: interface
name: nic6
- type: ovs_user_bridge
name: br-dpdk1
mtu: 9000
use_dhcp: false
members:
- type: ovs_dpdk_port
name: dpdk4
mtu: 9000
rx_queue: 3
members:
- type: interface
name: nic4
- type: sriov_pf
name: nic9
numvfs: 10
mtu: 9000
use_dhcp: false
promisc: true
- type: sriov_pf
name: nic10
numvfs: 10
mtu: 9000
use_dhcp: false
promisc: true
- type: sriov_pf
name: nic11
numvfs: 5
mtu: 9000
use_dhcp: false
promisc: true
- type: sriov_pf
name: nic12
numvfs: 5
mtu: 9000
use_dhcp: false
promisc: true
edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings: sriov-1:enp5s0f2,sriov-2:enp5s0f3,sriov-3:enp7s0f0np0,sriov-4:enp7s0f1np1
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
edpm_nova_libvirt_qemu_group: hugetlbfs
edpm_ovn_bridge_mappings:
- dpdkmgmt:br-link0
- dpdkdata0:br-dpdk0
- dpdkdata1:br-dpdk1
edpm_ovs_dpdk_memory_channels: "4"
edpm_ovs_dpdk_pmd_auto_lb: "true"
edpm_ovs_dpdk_pmd_core_list: 2,3,4,5,6,7,22,23,24,25,26,27
edpm_ovs_dpdk_pmd_improvement_threshold: "25"
edpm_ovs_dpdk_pmd_load_threshold: "70"
edpm_ovs_dpdk_pmd_rebal_interval: "2"
edpm_ovs_dpdk_socket_memory: 4096,4096
edpm_ovs_dpdk_vhost_postcopy_support: "true"
edpm_selinux_mode: enforcing
edpm_sshd_allowed_ranges:
- 192.168.122.0/24
edpm_sshd_configure_firewall: true
edpm_tuned_isolated_cores: 2-19,22-39
edpm_tuned_profile: cpu-partitioning-powersave
enable_debug: false
gather_facts: false
neutron_physical_bridge_name: br-access
neutron_public_interface_name: nic1
service_net_map:
nova_api_network: internalapi
nova_libvirt_network: internalapi
timesync_ntp_servers:
- hostname: clock.redhat.com
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
managementNetwork: ctlplane
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-compute-0:
hostName: compute-0
edpm-compute-1:
hostName: compute-1
preProvisioned: false
services:
- redhat
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-ovn-igmp
- neutron-metadata
- neutron-sriov
- libvirt
- nova-custom-ovsdpdksriov
- telemetry
---
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
name: neutron-ovn-igmp
namespace: openstack
spec:
caCerts: combined-ca-bundle
dataSources:
- configMapRef:
name: neutron-igmp
- secretRef:
name: neutron-ovn-agent-neutron-config
edpmServiceType: neutron-ovn
label: neutron-ovn-igmp
playbook: osp.edpm.neutron_ovn
tlsCerts:
default:
contents:
- dnsnames
- ips
issuer: osp-rootca-issuer-ovn
keyUsages:
- digital signature
- key encipherment
- client auth
networks:
- ctlplane
---
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
name: nova-custom-ovsdpdksriov
namespace: openstack
spec:
caCerts: combined-ca-bundle
dataSources:
- configMapRef:
name: ovs-dpdk-sriov-cpu-pinning-nova
- configMapRef:
name: sriov-nova
- secretRef:
name: nova-cell1-compute-config
- secretRef:
name: nova-migration-ssh-key
edpmServiceType: nova
label: nova-custom-ovsdpdksriov
playbook: osp.edpm.nova
tlsCerts:
default:
contents:
- dnsnames
- ips
issuer: osp-rootca-issuer-internal
networks:
- ctlplane
- edpm-compute-n - Defines the edpm_network_config_os_net_config_mappings variable to map the actual NICs. You identify each NIC by specifying the MAC address or the device name on each Compute node to the NIC ID that the RHOSO os-net-config tool uses, which is typically nic<n>.
- linux_bond - Creates a control-plane Linux bond for an isolated network. In this example, a Linux bond is created with mode active-backup on nic3 and nic4.
- type: vlan - Assigns VLANs to Linux bonds. In this example, the VLAN ID of the internalapi and storage networks is assigned to bond_api.
- ovs_user_bridge - Sets a bridge with OVS-DPDK ports. In this example, an OVS user bridge is created with a DPDK bond that has two DPDK ports that correspond to nic7 and nic8 for the tenant network. A GENEVE tunnel is used.
- sriov_pf - Creates SR-IOV VFs. In this example, an interface type of sriov_pf is configured as a physical function that the host can use.
- numvfs - Sets the number of VFs that are required.
10.10.2. Example template - partitioned NIC
This template example configures the RHOSO networks on a NIC that is partitioned. This example only shows the portion of the custom resource (CR) definition where the NIC is partitioned.
edpm_network_config_os_net_config_mappings:
dellr750:
dmiString: system-product-name
id: PowerEdge R750
nic1: eno8303
nic2: ens1f0
nic3: ens1f1
nic4: ens1f2
nic5: ens1f3
nic6: ens2f0np0
nic7: ens2f1np1
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup(vars, networks_lower[network] ~ _mtu)) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: interface
name: nic2
use_dhcp: false
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes:
- default: true
next_hop: {{ ctlplane_gateway_ip }}
- type: sriov_pf
name: nic3
mtu: 9000
numvfs: 5
use_dhcp: false
defroute: false
nm_controlled: true
hotplug: true
- type: sriov_pf
name: nic4
mtu: 9000
numvfs: 5
use_dhcp: false
defroute: false
nm_controlled: true
hotplug: true
- type: linux_bond
name: bond_api
use_dhcp: false
bonding_options: "mode=active-backup"
dns_servers: {{ ctlplane_dns_nameservers }}
members:
- type: sriov_vf
device: nic3
vfid: 0
vlan_id: {{ lookup(vars, networks_lower[internalapi] ~ _vlan_id) }}
- type: sriov_vf
device: nic4
vfid: 0
vlan_id: {{ lookup(vars, networks_lower[internalapi] ~ _vlan_id) }}
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[internalapi] ~ _ip) }}/{{ lookup(vars, networks_lower[internalapi] ~ _cidr) }}
- type: linux_bond
name: storage_bond
use_dhcp: false
bonding_options: "mode=active-backup"
dns_servers: {{ ctlplane_dns_nameservers }}
members:
- type: sriov_vf
device: nic3
vfid: 1
vlan_id: {{ lookup(vars, networks_lower[storage] ~ _vlan_id) }}
- type: sriov_vf
device: nic4
vfid: 1
vlan_id: {{ lookup(vars, networks_lower[storage] ~ _vlan_id) }}
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[storage] ~ _ip) }}/{{ lookup(vars, networks_lower[storage] ~ _cidr) }}
- type: linux_bond
name: mgmtst_bond
use_dhcp: false
bonding_options: "mode=active-backup"
dns_servers: {{ ctlplane_dns_nameservers }}
members:
- type: sriov_vf
device: nic3
vfid: 2
vlan_id: {{ lookup(vars, networks_lower[storagemgmt] ~ _vlan_id) }}
- type: sriov_vf
device: nic4
vfid: 2
vlan_id: {{ lookup(vars, networks_lower[storagemgmt] ~ _vlan_id) }}
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[storagemgmt] ~ _ip) }}/{{ lookup(vars, networks_lower[storagemgmt] ~ _cidr) }}
- type: ovs_user_bridge
name: br-link0
use_dhcp: false
mtu: 9000
ovs_extra: "set port br-link0 tag={{ lookup(vars, networks_lower[tenant] ~ _vlan_id) }}"
addresses:
- ip_netmask: {{ lookup(vars, networks_lower[tenant] ~ _ip) }}/{{ lookup(vars, networks_lower[tenant] ~ _cidr) }}
members:
- type: ovs_dpdk_bond
name: dpdkbond0
mtu: 9000
rx_queue: 1
members:
- type: ovs_dpdk_port
name: dpdk0
members:
- type: sriov_vf
device: nic3
vfid: 3
- type: ovs_dpdk_port
name: dpdk1
members:
- type: sriov_vf
device: nic4
vfid: 3
- type: ovs_user_bridge
name: br-dpdk0
use_dhcp: false
mtu: 9000
rx_queue: 1
members:
- type: ovs_dpdk_port
name: dpdk2
members:
- type: interface
name: nic5
- type: sriov_pf
name: nic6
mtu: 9000
numvfs: 5
use_dhcp: false
defroute: false
- type: sriov_pf
name: nic7
mtu: 9000
numvfs: 5
use_dhcp: false
defroute: false
10.11. Deploying the data plane
You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution.
When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.
Create an OpenStackDataPlaneDeployment (CR) that deploys each of your OpenStackDataPlaneNodeSet CRs.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane

- name - The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.

Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

spec:
  nodeSets:
    - openstack-data-plane
    - <nodeSet_name>
    - ...
    - <nodeSet_name>
  services:
    ...

- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment, for example, openstack_preprovisioned_node_set.yaml or openstack_unprovisioned_node_set.yaml.

Save the openstack_data_plane_deploy.yaml deployment file.

Deploy the data plane:

$ oc create -f openstack_data_plane_deploy.yaml -n openstack

You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

Confirm that the data plane is deployed:

$ oc get openstackdataplanedeployment -n openstack

- Sample output

NAME                   STATUS   MESSAGE
openstack-data-plane   True     Setup Complete

Repeat the oc get command until you see the NodeSet Ready message:

$ oc get openstackdataplanenodeset -n openstack

- Sample output

NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready

For information about the meaning of the returned status, see Data plane conditions and states.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.
Map the Compute nodes to the Compute cell that they are connected to:

$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to cell1.
Verification
Access the remote shell for the openstackclient pod and confirm that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
10.12. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicate the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
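If you want to inspect these conditions directly, a generic query such as the following prints them from the status subresource; openstack-edpm is the node set name used in the earlier example and is a placeholder for your own node set:
$ oc get openstackdataplanenodeset openstack-edpm -n openstack -o jsonpath='{.status.conditions[*].type}{"\n"}'
$ oc describe openstackdataplanenodeset openstack-edpm -n openstack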
| Condition | Description |
|---|---|
| Ready | "True": The NodeSet has been successfully deployed. "False": The deployment is not yet requested, is in progress, or has failed. |
| SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
| DeploymentReady | "True": The NodeSet has been successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| NodeSetDNSDataReady | "True": DNSData resources are ready. |
| NodeSetIPReservationReady | "True": The IPSet resources are ready. |
| NodeSetBaremetalProvisionReady | "True": Bare-metal nodes are provisioned and ready. |
| Condition | Description |
|---|---|
| Ready | "True": The data plane is successfully deployed. "False": The deployment is in progress or has failed. |
| DeploymentReady | "True": The data plane is successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| <NodeSet> Deployment Ready | "True": The deployment has succeeded for the named NodeSet. |
| <NodeSet> <Service> Deployment Ready | "True": The deployment has succeeded for the named NodeSet and service. |
| Condition | Description |
|---|---|
| Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created. |
10.13. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
10.13.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:

$ oc get openstackdataplanedeployment

The following example output shows two deployments currently in progress:

$ oc get openstackdataplanedeployment
NAME           NODESETS                  STATUS   MESSAGE
edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress

Retrieve and inspect Ansible execution jobs. The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list jobs for each OpenStackDataPlaneDeployment by using the label:

$ oc get job -l openstackdataplanedeployment=edpm-compute
NAME                                                  STATUS     COMPLETIONS   DURATION   AGE
bootstrap-edpm-compute-openstack-edpm-ipam            Complete   1/1           78s        25h
configure-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           37s        25h
configure-os-edpm-compute-openstack-edpm-ipam         Complete   1/1           66s        25h
download-cache-edpm-compute-openstack-edpm-ipam       Complete   1/1           64s        25h
install-certs-edpm-compute-openstack-edpm-ipam        Complete   1/1           46s        25h
install-os-edpm-compute-openstack-edpm-ipam           Complete   1/1           57s        25h
libvirt-edpm-compute-openstack-edpm-ipam              Complete   1/1           2m37s      25h
neutron-metadata-edpm-compute-openstack-edpm-ipam     Complete   1/1           61s        25h
nova-edpm-compute-openstack-edpm-ipam                 Complete   1/1           3m20s      25h
ovn-edpm-compute-openstack-edpm-ipam                  Complete   1/1           78s        25h
run-os-edpm-compute-openstack-edpm-ipam               Complete   1/1           33s        25h
ssh-known-hosts-edpm-compute                          Complete   1/1           19s        25h
telemetry-edpm-compute-openstack-edpm-ipam            Complete   1/1           2m5s       25h
validate-network-edpm-compute-openstack-edpm-ipam     Complete   1/1           16s        25h

You can check logs by using oc logs -f job/<job-name>, for example, if you want to check the logs from the configure-network job:

$ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
PLAY RECAP *********************************************************************
edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
10.13.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
10.13.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve pods with the OpenStackAnsibleEE label:

$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s

SSH into the pod you want to check:

- Pod that is running:
$ oc rsh validate-network-edpm-compute-6g7n9
- Pod that is not running:
$ oc debug configure-network-edpm-compute-j6r4l

List the directories in the /runner/artifacts mount:

$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute

View the stdout for the required artifact:

$ cat /runner/artifacts/configure-network-edpm-compute/stdout
Chapter 11. Accessing the RHOSO cloud
You can access your Red Hat OpenStack Services on OpenShift (RHOSO) cloud to perform actions on your data plane by either accessing the OpenStackClient pod through a remote shell from your workstation, or by using a browser to access the Dashboard service (horizon) interface.
11.1. Accessing the OpenStackClient pod
You can execute Red Hat OpenStack Services on OpenShift (RHOSO) commands on the deployed data plane by using the OpenStackClient pod through a remote shell from your workstation. The OpenStack Operator created the OpenStackClient pod as a part of the OpenStackControlPlane resource. The OpenStackClient pod contains the client tools and authentication details that you require to perform actions on your data plane.
Prerequisites
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
Procedure
Access the remote shell for the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

Run your openstack commands. For example, you can create a default network with the following command:

$ openstack network create default

Exit the OpenStackClient pod:

$ exit
11.2. Accessing the Dashboard service (horizon) interface
You can access the OpenStack Dashboard service (horizon) interface by providing the Dashboard service endpoint URL in a browser.
Prerequisites
- The Dashboard service is enabled on the control plane. For information about how to enable the Dashboard service, see Enabling the Dashboard service (horizon) interface in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- You need to log into the Dashboard as the admin user.
Procedure
Retrieve the admin password from the AdminPassword parameter in the osp-secret secret:

$ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d

Retrieve the Dashboard service endpoint URL:

$ oc get horizons horizon -o jsonpath='{.status.endpoint}'

- Open a browser.
- Enter the Dashboard endpoint URL.
- Log in to the Dashboard by providing the username of admin and the admin password.
Chapter 12. Tuning NFV in a Red Hat OpenStack Services on OpenShift environment
You can configure a variety of parameters to tune your Red Hat OpenStack Services on OpenShift (RHOSO) NFV environment.
12.1. Managing port security in NFV environments
Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules.
By default, the port_security_enabled parameter is set to enabled on newly created Networking service (neutron) networks in Red Hat OpenStack Services on OpenShift (RHOSO) environments. Newly created ports copy the value of the port_security_enabled parameter from the network they are created on.
For some NFV use cases, such as building a firewall or router, you must disable port security.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:

$ oc rsh -n openstack openstackclient

To disable port security on a single port, run the following command:

$ openstack port set --disable-port-security <port-id>

To prevent port security from being enabled on any newly created port on a network, run the following command:

$ openstack network set --disable-port-security <network-id>

Exit the openstackclient pod:

$ exit
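To confirm the result, you can display the port_security_enabled field for the port or network; the following commands are illustrative, with <port-id> and <network-id> as placeholders:
$ openstack port show <port-id> -c port_security_enabled
$ openstack network show <network-id> -c port_security_enabled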
12.2. Creating and using VF ports
By running various OpenStack CLI client commands, you can create and use virtual function (VF) ports.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:

$ oc rsh -n openstack openstackclient

Create a network of type vlan.
- Example

$ openstack network create trusted_vf_network \
  --provider-network-type vlan --provider-segment 111 \
  --provider-physical-network sriov2 --external --disable-port-security

Create a subnet.
- Example

$ openstack subnet create --network trusted_vf_network \
  --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
  subnet-trusted_vf_network

Create a port. Set the vnic-type option to direct, and the binding-profile option to true.
- Example

$ openstack port create --network trusted_vf_network \
  --vnic-type direct --binding-profile trusted=true \
  sriov111_port_trusted

Create an instance, and bind it to the previously created trusted port.
- Example

$ openstack server create --image rhel --flavor dpdk \
  --network trusted_vf_network --port sriov111_port_trusted \
  --config-drive True --wait rhel-dpdk-sriov_trusted

Exit the openstackclient pod:

$ exit

Verification

On the compute node where you created the instance, enter the following command:

$ ip link

- Sample output

7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
   link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
   vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
   vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off

- Verify that the trust status of the VF is trust on. The example output contains details of an environment that contains two ports. Note that vf 6 contains the text trust on.
- You can disable spoof checking if you set port_security_enabled: false in the Networking service (neutron) network, or if you include the argument --disable-port-security when you run the openstack port create command.
12.3. Known limitations for NUMA-aware vSwitches
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Services on OpenShift (RHOSO) network functions virtualization infrastructure (NFVi).
- You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies.
- If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node.
- Configure NUMA affinity for data plane networks, not management networks.
- NUMA affinity for tunneled networks is a global setting that applies to all VMs.
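For reference, the two-node guest NUMA topology that the first three limitations refer to is typically requested through flavor extra specs. A minimal, illustrative example, where dpdk-flavor is a placeholder flavor name, is:
$ openstack flavor set dpdk-flavor --property hw:numa_nodes=2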
12.4. Quality of Service (QoS) in NFVi environments
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Services on OpenShift (RHOSO) networks in a network functions virtualization infrastructure (NFVi).
In NFVi environments, QoS support is limited to the following rule types:
- minimum bandwidth on SR-IOV, if supported by the vendor.
- bandwidth limit on SR-IOV and OVS-DPDK egress interfaces.
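As an illustration of these rule types, the following commands sketch a QoS policy with an egress bandwidth limit and a minimum bandwidth rule. The policy name, port ID, and kbps values are placeholders, and minimum bandwidth enforcement on SR-IOV still depends on vendor support as noted above:
$ openstack network qos policy create vnf-qos
$ openstack network qos rule create --type bandwidth-limit \
  --max-kbps 100000 --max-burst-kbits 10000 --egress vnf-qos
$ openstack network qos rule create --type minimum-bandwidth \
  --min-kbps 50000 --egress vnf-qos
$ openstack port set --qos-policy vnf-qos <port-id>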
12.5. Creating an HCI data plane that uses DPDK
You can deploy your NFV infrastructure with hyperconverged nodes, by co-locating and configuring Compute and Ceph Storage services for optimized resource usage.
For more information about hyperconverged infrastructure (HCI), see Deploying a hyperconverged infrastructure environment.
12.5.1. Example NUMA node configuration
For increased performance, place the tenant network and the Ceph Object Storage Daemons (OSDs) in one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs in another NUMA node, such as NUMA-1.
| NUMA-0 | NUMA-1 |
|---|---|
| Number of Ceph OSDs * 4 HT | Guest vCPU for the VNF and non-NFV VMs |
| DPDK lcore - 2 HT | DPDK lcore - 2 HT |
| DPDK PMD - 2 HT | DPDK PMD - 2 HT |
| | NUMA-0 | NUMA-1 |
|---|---|---|
| Ceph OSD | 32,34,36,38,40,42,76,78,80,82,84,86 | |
| DPDK-lcore | 0,44 | 1,45 |
| DPDK-pmd | 2,46 | 3,47 |
| nova | 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 |
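The following is a minimal sketch of how the DPDK rows in this example layout might map to the edpm-ansible variables described earlier in this guide. The values are placeholders derived from the table and must be adapted to your own topology and hugepage sizing:
edpm_ovs_dpdk_pmd_core_list: 2,46,3,47      # DPDK PMD sibling pairs on NUMA-0 and NUMA-1
edpm_ovs_dpdk_memory_channels: "4"          # memory channels reported by dmidecode, divided across NUMA nodes
edpm_ovs_dpdk_socket_memory: "4096,4096"    # hugepage pool per NUMA node (placeholder sizing)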
12.5.2. Recommended configuration for HCI-DPDK deployments
The following table lists the parameters that you can tune for HCI deployments:
| Block Device Type | OSDs, Memory, vCPUs per device |
|---|---|
| NVMe | Memory : 5GB per OSD OSDs per device: 4 vCPUs per device: 3 |
| SSD | Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 4 |
| HDD | Memory : 5GB per OSD OSDs per device: 1 vCPUs per device: 1 |
Use the same NUMA node for the following functions:
- Disk controller
- Storage networks
- Storage CPU and memory
Allocate another NUMA node for the following functions of the DPDK provider network:
- NIC
- PMD CPUs
- Socket memory