Chapter 5. Configuring Networker nodes
In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can add Networker nodes to the RHOSO data plane. Networker nodes can serve as gateways to external networks.
With or without gateways, Networker nodes can serve other purposes as well. For example, Networker nodes are required when you deploy the neutron-dhcp-agent in a RHOSO environment that has a routed spine-leaf network topology with DHCP relays running on leaf nodes. Networker nodes can also provide metadata for SR-IOV ports.
If your NICs support DPDK, you can enable DPDK on the Networker node interfaces to accelerate gateway traffic processing.
Networker nodes are similar to other RHOSO data plane nodes such as Compute nodes. Like Compute nodes, Networker nodes use the RHEL 9.4 or 9.6 operating system. Networker nodes and Compute nodes share some common services and configuration features, and each has a set of role-specific services and configurations. For example, unlike Compute nodes, Networker nodes do not require the Nova or libvirt services.
A data plane typically consists of multiple OpenStackDataPlaneNodeSet custom resources (CRs) to define sets of nodes with different configurations and roles. For example, one node set might define your data plane Networker nodes. Others might define functionally related sets of Compute nodes.
You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane with or without Networker nodes, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes (Networker nodes and Compute nodes).
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane. One of the following procedures describes how to create Networker node sets with pre-provisioned nodes. The other describes how to create Networker node sets with unprovisioned bare-metal nodes that must be provisioned during the node set deployment.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
- An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:
$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096

- Replace <key_file_name> with the name to use for the key pair.

Create the Secret CR for Ansible and apply it to the cluster:

$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --save-config \
  --dry-run=client \
  --from-file=ssh-privatekey=<key_file_name> \
  --from-file=ssh-publickey=<key_file_name>.pub \
  [--from-file=authorized_keys=<key_file_name>.pub] \
  -n openstack \
  -o yaml | oc apply -f -

- Replace <key_file_name> with the name and location of your SSH key pair file.
- Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
If you are creating Compute nodes, create a secret for migration.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''

Create the Secret CR for migration and apply it to the cluster:

$ oc create secret generic nova-migration-ssh-key \
  --save-config \
  --from-file=ssh-privatekey=nova-migration-ssh-key \
  --from-file=ssh-publickey=nova-migration-ssh-key.pub \
  -n openstack \
  -o yaml | oc apply -f -
For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

$ oc create secret generic subscription-manager \
  --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry \
  --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

- Replace <username> and <password> with your Red Hat registry username and password credentials.

For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
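Because a malformed edpm_container_registry_logins value is easy to miss inside the oc command, you can optionally sanity-check the JSON before creating the secret. This is a sketch; the username and password below are placeholder values, not real credentials:

```shell
# Validate the registry-logins JSON before passing it to oc create secret.
# "myuser" and "mypass" are placeholder values.
logins='{"registry.redhat.io": {"myuser": "mypass"}}'
printf '%s' "$logins" | python3 -m json.tool
```

If the value is not valid JSON, python3 -m json.tool exits with an error instead of printing the pretty-printed document.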
If you are creating Compute nodes, create a secret for libvirt.
Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>

- Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64

Tip: If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
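As a quick check, you can confirm that an encoded password decodes back to the original value and respects the 63-character limit. The password below is only an example:

```shell
# Encode an example libvirt password and verify the base64 round trip.
password='s3cr3t-libvirt-pass'   # example value, maximum 63 characters
encoded=$(printf '%s' "$password" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$encoded"
```

Using printf instead of echo -n avoids a stray newline being encoded into the secret on shells where echo -n is not portable.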
Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack

Verify that the Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
5.3. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with pre-provisioned nodes
You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of pre-provisioned Networker nodes in your data plane. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet CR that configures a set of pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.
If you want to use OVS-DPDK on a set of pre-provisioned Networker nodes, you must use a different configuration in the OpenStackDataPlaneNodeSet CR. For an example, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.
Procedure
Create a file on your workstation named openstack_preprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  env:
  - name: ANSIBLE_FORCE_COLOR
    value: "True"

- name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lowercase alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. If necessary, replace the example name networker-nodes with a name that more accurately describes your node set.
- env - Optional: A list of environment variables to pass to the pod.
Include the services field to override the default services. Remove the nova, libvirt, and other services that are not required by a Networker node:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  ...
  services:
  - redhat
  - bootstrap
  - download-cache
  - reboot-os
  - configure-ovs-dpdk
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - install-certs
  - ovn
  - neutron-metadata
  - neutron-dhcp

- configure-ovs-dpdk - The configure-ovs-dpdk service is required only when DPDK NICs are used in the deployment.
- neutron-metadata - The neutron-metadata service is required only when SR-IOV ports are used in the deployment.
- neutron-dhcp - You can optionally run the neutron-dhcp service on your Networker nodes. You might need to use neutron-dhcp with OVN if your deployment uses DHCP relays, or advanced DHCP options that are supported by dnsmasq but not by the OVN DHCP implementation.
Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
  - ctlplane

Enable the chassis as gateway:

spec:
  ...
  nodeTemplate:
    ansible:
      ...
      edpm_enable_chassis_gw: true

Specify that the nodes in this set are pre-provisioned:

spec:
  ...
  preProvisioned: true

Add the SSH key secret that you created so that Ansible can connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.

Enable persistent logging for the Networker nodes:

nodeTemplate:
  ...
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"

- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
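A minimal PVC that satisfies these requirements might look like the following sketch. The name, storage class defaults, and requested size are assumptions; adjust them for your cluster:

```yaml
# Hypothetical PVC for ansible-runner logs.
# Use Filesystem mode and ReadWriteOnce; avoid NFS-backed storage classes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc
  namespace: openstack
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
```

If you use this sketch, set claimName in the extraMounts section to ansible-logs-pvc.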
Specify the management network:

nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to the Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ...
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
    - secretRef:
        name: subscription-manager
    - secretRef:
        name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
      - {name: "*", state: disabled}
      - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
      - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
      - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
      - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
      - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
      - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
      edpm_bootstrap_release_version_package: []

- ansibleUser - The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars - The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your Networker nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      neutron_physical_bridge_name: br-ex
      neutron_public_interface_name: eth0
      edpm_network_config_nmstate: true
      edpm_network_config_update: false

- edpm_network_config_nmstate - Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information about the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
- edpm_network_config_update - When deploying a node set for the first time, set the edpm_network_config_update variable to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

Important: After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:

edpm_network_config_template: |
  ...
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_user_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic1
  - type: linux_bond
    name: bond_api
    use_dhcp: false
    bonding_options: "mode=active-backup"
    dns_servers: {{ ctlplane_dns_nameservers }}
    members:
    - type: interface
      name: nic2
      primary: true
  - type: vlan
    vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
    device: bond_api
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
  - type: ovs_user_bridge
    name: br-link0
    use_dhcp: false
    ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
    members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      rx_queue: 1
      ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
      members:
      - type: ovs_dpdk_port
        name: dpdk1
        members:
        - type: interface
          name: nic3
      - type: ovs_dpdk_port
        name: dpdk2
        members:
        - type: interface
          name: nic4
  - type: ovs_user_bridge
    name: br-link1
    use_dhcp: false
    members:
    - type: ovs_dpdk_bond
      name: dpdkbond1
      mtu: 9000
      rx_queue: 1
      ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
      members:
      - type: ovs_dpdk_port
        name: dpdk3
        members:
        - type: interface
          name: nic5
      - type: ovs_dpdk_port
        name: dpdk4
        members:
        - type: interface
          name: nic6
neutron_physical_bridge_name: br-ex

The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:

edpm_network_config_template: |
  ---
  {% set mtu_list = [ctlplane_mtu] %}
  {% for network in nodeset_networks %}
  {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
  {%- endfor %}
  {% set min_viable_mtu = mtu_list | max %}
  network_config:
  - type: ovs_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: interface
      name: nic2
      mtu: {{ min_viable_mtu }}
      # force the MAC address of the bridge to this interface
      primary: true
    {% for network in nodeset_networks %}
    - type: vlan
      mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
      addresses:
      - ip_netmask: >-
          {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
      routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
    {% endfor %}

For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
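The template-update workflow can be summarized as a small vars fragment. This is a sketch of the temporary state you apply while reworking an existing network template, not a permanent setting:

```yaml
# Hypothetical fragment: enable reapplying an edited network template.
# Reset edpm_network_config_update to false after the deployment completes;
# otherwise the configuration is reapplied on every matching deployment.
nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_update: true
```

After the OpenStackDataPlaneDeployment that carries the change finishes, set the variable back to false before creating further deployments.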
Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

Define each node in this node set:

nodes:
  edpm-networker-0:
    hostName: edpm-networker-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.100
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.100
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.100
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-0.example.com
  edpm-networker-1:
    hostName: edpm-networker-1
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.101
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.101
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.101
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.101
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-networker-1.example.com

- edpm-networker-0 - The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
- networks - Defines the IPAM and the DNS records for the node.
- fixedIP - Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
- ansibleVars - Node-specific Ansible variables that customize the node.

Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
Save the openstack_preprovisioned_networker_node_set.yaml definition file.

Create the data plane resources:

$ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack

Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset networker-nodes --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a "condition met" message; otherwise, it returns a timeout error.

For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep networker-nodes
dataplanenodeset-networker-nodes   Opaque   1   3m50s

Verify the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           46m
ceph-client         46m
ceph-hci-pre        46m
configure-network   46m
configure-os        46m
...
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment, or remove them, before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
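As an optional sanity check, these naming rules can be expressed as a shell pattern. The name tested below is the example name used in this section:

```shell
# Check an OpenStackDataPlaneNodeSet name: lowercase alphanumerics plus '-'
# or '.', an alphanumeric first and last character, at most 53 characters.
name='openstack-networker-nodes'
if printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]{0,51}[a-z0-9])?$'; then
  echo "valid: $name"
else
  echo "invalid: $name"
fi
```

The {0,51} bound plus the mandatory first and last characters enforces the 53-character maximum.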
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-networker-nodes
namespace: openstack
spec:
services:
- bootstrap
- download-cache
- reboot-os
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: true
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
set -e
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
neutron_physical_bridge_name: br-ex
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-networker-0:
hostName: edpm-networker-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.100
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.100
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.100
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-0.example.com
edpm-networker-1:
hostName: edpm-networker-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.101
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.101
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.101
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-1.example.com
5.3.2. Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes that run OVS-DPDK, with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment, or remove them, before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
apiVersion: v1
kind: ConfigMap
metadata:
name: networker-nodeset-values
annotations:
config.kubernetes.io/local-config: "true"
data:
root_password: cmVkaGF0Cg==
preProvisioned: false
baremetalSetTemplate:
ctlplaneInterface: <control plane interface>
cloudUserName: cloud-admin
provisioningInterface: <provisioning network interface>
bmhLabelSelector:
app: openstack-networker
passwordSecret:
name: baremetalset-password-secret
namespace: openstack
ssh_keys:
# Authorized keys that will have access to the dataplane networkers via SSH
authorized: <authorized key>
# The private key that will have access to the dataplane networkers via SSH
private: <private key>
# The public key that will have access to the dataplane networkers via SSH
public: <public key>
nodeset:
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVars:
edpm_enable_chassis_gw: true
...
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
set -e
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
edpm_network_config_template: |
...
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: interface
name: nic2
use_dhcp: false
- type: ovs_user_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: ovs_dpdk_port
rx_queue: 1
name: dpdk0
members:
- type: interface
name: nic3
# These vars are for the network config templates themselves and are
# considered EDPM network defaults.
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: nic1
# edpm_nodes_validation
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
dns_search_domains: []
gather_facts: false
# edpm firewall, change the allowed CIDR if needed
edpm_sshd_configure_firewall: true
edpm_sshd_allowed_ranges:
- 192.168.122.0/24
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-networker-0:
hostName: edpm-networker-0
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-metadata
5.4. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with unprovisioned nodes
To create Networker nodes with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal Networker node.
- Define an OpenStackDataPlaneNodeSet CR for the Networker nodes.
Prerequisites
- Your RHOCP cluster supports provisioning bare-metal nodes. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- Your Cluster Baremetal Operator (CBO) is configured for provisioning. For more information, see Provisioning [metal3.io/v1alpha1] in the RHOCP API Reference.
5.4.1. Creating the BareMetalHost CRs for unprovisioned Networker nodes
You must create a BareMetalHost custom resource (CR) for each bare-metal Networker node. At a minimum, you must provide the data required to add the bare-metal Networker node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range different from the ctlplane address range to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.
Procedure
- The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'

- If you are using virtual media boot for bare-metal Networker nodes and the nodes are not connected to a provisioning network, update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'

- Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal Networker node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-networker-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>

  Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

    $ echo -n <string> | base64

  Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
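Because base64 encoding is reversible, you can sanity-check an encoded value by decoding it again. The following shell sketch uses placeholder credentials, not real values:

```shell
# Encode placeholder BMC credentials (example values only)
username_b64=$(echo -n 'admin' | base64)
password_b64=$(echo -n 's3cret' | base64)

# The base64 form goes into the Secret's data field
echo "$username_b64"

# Decoding returns the original string, confirming the encoding is correct
echo -n "$username_b64" | base64 -d
```

Note that `echo -n` is important: without `-n`, a trailing newline is encoded into the value and the BMC login fails with credentials that look correct.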
- Create a file named bmh_networker_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal Networker node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-networker-0
      namespace: openstack
      labels:
        app: openstack-networker
        workload: networker
    spec:
      ...
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
        credentialsName: edpm-networker-0-bmc-secret
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
      [preprovisioningNetworkDataName: <network_config_secret_name>]
  - labels - Metadata labels, such as app, workload, and nodeName, are key-value pairs that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
  - address - The URL for communicating with the node's BMC controller. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
  - credentialsName - The name of the Secret CR you created in the previous step for accessing the BMC of the node.
  - preprovisioningNetworkDataName - Optional: The name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

  For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP documentation.
- Create the BareMetalHost resources:

    $ oc create -f bmh_networker_nodes.yaml

- Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME               STATE       CONSUMER         ONLINE   ERROR   AGE
    edpm-networker-0   Available   openstack-edpm   true             2d21h
    edpm-networker-1   Available   openstack-edpm   true             2d21h
    ...
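When a node set contains many hosts, scanning the STATE column by eye is error-prone. The following sketch filters the output for hosts that are not yet Available; the sample text stands in for real `oc get bmh` output, and on a live cluster you would pipe the command itself into the same awk filter:

```shell
# Sample output standing in for `oc get bmh` (placeholder data)
sample_output='NAME               STATE          CONSUMER         ONLINE   ERROR   AGE
edpm-networker-0   Available      openstack-edpm   true              2d21h
edpm-networker-1   provisioning   openstack-edpm   true              2d21h'

# Print any host whose STATE column is not "Available"
not_ready=$(echo "$sample_output" | awk 'NR > 1 && $2 != "Available" { print $1 " is " $2 }')
echo "$not_ready"
```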
5.4.2. Creating an OpenStackDataPlaneNodeSet CR for a set of Networker nodes with unprovisioned nodes
Define an OpenStackDataPlaneNodeSet custom resource (CR) for a group of Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
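A minimal sketch of this precedence, using one illustrative variable; the values are assumptions, not recommendations:

```yaml
nodeTemplate:
  ansible:
    ansibleVars:
      edpm_enable_chassis_gw: true    # applies to every node in the set
nodes:
  edpm-networker-0:
    ansible:
      ansibleVars:
        edpm_enable_chassis_gw: false # node-specific value overrides the template
```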
For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example node set CR for unprovisioned Networker nodes with OVS-DPDK.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes.
Procedure
- Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      tlsEnabled: true
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"

  - name - The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
  - env - Optional: a list of environment variables to pass to the pod.
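The naming constraints can be checked before you apply the CR. The following sketch encodes them as a regular expression (lower case alphanumerics plus - or ., alphanumeric first and last character, at most 53 characters); the helper name is an illustration, not a documented tool:

```shell
# Validate a candidate OpenStackDataPlaneNodeSet name against the
# documented constraints using a POSIX extended regular expression.
validate_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]{0,51}[a-z0-9])?$'
}

validate_name 'networker-nodes' && echo 'networker-nodes: valid'
validate_name 'Networker_Nodes' || echo 'Networker_Nodes: invalid'
```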
- Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane

- Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

    preProvisioned: false

- Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

    baremetalSetTemplate:
      deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
      bmhNamespace: <bmh_namespace>
      cloudUserName: <ansible_ssh_user>
      bmhLabelSelector:
        app: <bmh_label>
      ctlplaneInterface: <interface>

  - Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api.
  - Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
  - Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack-networker. Metadata labels, such as app, workload, and nodeName, are key-value pairs that provide varying levels of granularity for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on labels that match the labels in the corresponding BareMetalHost CR.
  - Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
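For reference, a baremetalSetTemplate with the example values from this procedure substituted in; all values are illustrative and must match your own BareMetalHost CRs:

```yaml
baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: openshift-machine-api
  cloudUserName: cloud-admin
  bmhLabelSelector:
    app: openstack-networker
  ctlplaneInterface: enp6s0
```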
- If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

    baremetalSetTemplate:
      ...
      provisionServerName: my-os-provision-server

- Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

    nodeTemplate:
      ansibleSSHPrivateKeySecret: <secret-key>

  - Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment. Enable persistent logging for the data plane nodes:

    nodeTemplate:
      ...
      extraMounts:
        - extraVolType: Logs
          volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
          mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"

  - Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
- Specify the management network:

    nodeTemplate:
      ...
      managementNetwork: ctlplane

- Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to the Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

    nodeTemplate:
      ansible:
        ansibleUser: cloud-admin
        ansiblePort: 22
        ansibleVarsFrom:
          - secretRef:
              name: subscription-manager
          - secretRef:
              name: redhat-registry
        ansibleVars:
          rhc_release: 9.4
          rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
          edpm_bootstrap_release_version_package: []

  - ansibleUser - The user associated with the secret you created in Creating the data plane secrets.
  - ansibleVars - The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

  For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
- Add the network configuration template to apply to your data plane nodes:

    nodeTemplate:
      ...
      ansible:
        ...
        ansiblePort: 22
        ansibleUser: cloud-admin
        ansibleVars:
          ...
          edpm_enable_chassis_gw: true
          edpm_network_config_nmstate: true
          ...
          neutron_physical_bridge_name: br-ex
          neutron_public_interface_name: eth0
          edpm_network_config_update: false

  - edpm_network_config_update - When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later modify edpm_network_config_template, first set edpm_network_config_update to true. Reset it to false after the update.

  Important: After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service as a member of the servicesOverride list.

  The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:

    edpm_network_config_template: |
      ...
      {% set mtu_list = [ctlplane_mtu] %}
      {% for network in nodeset_networks %}
      {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
      {%- endfor %}
      {% set min_viable_mtu = mtu_list | max %}
      network_config:
      - type: ovs_user_bridge
        name: {{ neutron_physical_bridge_name }}
        mtu: {{ min_viable_mtu }}
        use_dhcp: false
        dns_servers: {{ ctlplane_dns_nameservers }}
        domain: {{ dns_search_domains }}
        addresses:
        - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
        routes: {{ ctlplane_host_routes }}
        members:
        - type: ovs_dpdk_port
          driver: mlx5_core
          name: dpdk0
          mtu: {{ min_viable_mtu }}
          members:
          - type: sriov_vf
            device: nic6
            vfid: 0
        - type: interface
          name: nic1
          mtu: {{ min_viable_mtu }}
          # force the MAC address of the bridge to this interface
          primary: true
      {% for network in nodeset_networks %}
      - type: vlan
        mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
        vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
        addresses:
        - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
        routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
      {% endfor %}

  The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:

    edpm_network_config_template: |
      ...
      {% set mtu_list = [ctlplane_mtu] %}
      {% for network in nodeset_networks %}
      {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
      {%- endfor %}
      {% set min_viable_mtu = mtu_list | max %}
      network_config:
      - type: ovs_bridge
        name: {{ neutron_physical_bridge_name }}
        mtu: {{ min_viable_mtu }}
        use_dhcp: false
        dns_servers: {{ ctlplane_dns_nameservers }}
        domain: {{ dns_search_domains }}
        addresses:
        - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
        routes: {{ ctlplane_host_routes }}
        members:
        - type: interface
          name: nic2
          mtu: {{ min_viable_mtu }}
          # force the MAC address of the bridge to this interface
          primary: true
      {% for network in nodeset_networks %}
      - type: vlan
        mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
        vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
        addresses:
        - ip_netmask: >-
            {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
        routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
      {% endfor %}

  For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
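The Jinja preamble in both templates computes min_viable_mtu by collecting the ctlplane MTU plus each network's MTU and taking the maximum, so the bridge MTU can carry the largest-MTU network. A shell sketch of the same computation, with assumed MTU values:

```shell
# Assumed per-network MTUs standing in for ctlplane_mtu and the
# per-network *_mtu lookups in the template
ctlplane_mtu=1500
internalapi_mtu=1500
tenant_mtu=9000

# Equivalent of: {% set min_viable_mtu = mtu_list | max %}
min_viable_mtu=$(printf '%s\n' "$ctlplane_mtu" "$internalapi_mtu" "$tenant_mtu" | sort -n | tail -n 1)
echo "$min_viable_mtu"
```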
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

- Define each node in this node set:

    nodes:
      edpm-networker-0:
        hostName: networker-0
        networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
        ansible:
          ansibleHost: 192.168.122.100
          ansibleUser: cloud-admin
          ansibleVars:
            fqdn_internal_api: edpm-networker-0.example.com
        bmhLabelSelector:
          nodeName: edpm-networker-0
      edpm-networker-1:
        hostName: edpm-networker-1
        networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
        ansible:
          ansibleHost: 192.168.122.101
          ansibleUser: cloud-admin
          ansibleVars:
            fqdn_internal_api: edpm-networker-1.example.com
        bmhLabelSelector:
          nodeName: edpm-networker-1

  - edpm-networker-0 - The node definition reference. Each node in the node set must have a node definition.
  - networks - Defines the IPAM and the DNS records for the node.
  - fixedIP - Specifies a predictable IP address for the network. The address must be in the allocation range defined for the network in the NetConfig CR.
  - bmhLabelSelector - Optional: The BareMetalHost CR metadata label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node.

  Note:
  - Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific value overrides the value from the nodeTemplate section.
  - You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
  - Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
- Save the openstack_unprovisioned_node_set.yaml definition file.

- Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack

- Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

  When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

- Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

- Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME               STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-networker-0   provisioned   openstack-data-plane   true             3d21h

- Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           8m40s
    ceph-client         8m40s
    ceph-hci-pre        8m40s
    configure-network   8m40s
    configure-os        8m40s
    ...
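If you maintain a list of services you expect for your node sets, you can diff it against what the cluster reports. The following sketch uses placeholder lists; on a live cluster, populate the actual list from `oc get openstackdataplaneservice` output:

```shell
# Placeholder lists: services you expect vs. services the cluster reports
expected='bootstrap
configure-network
configure-os'
actual='bootstrap
ceph-client
configure-network'

# Collect expected services that are missing from the cluster's list
missing=''
for svc in $expected; do
  printf '%s\n' "$actual" | grep -qx "$svc" || missing="$missing $svc"
done
echo "missing:$missing"
```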
5.4.3. Example node set CR for unprovisioned Networker nodes with OVS-DPDK
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Networker nodes with OVS-DPDK and some node-specific configuration. The unprovisioned Networker nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  services:
    - redhat
    - bootstrap
    - download-cache
    - reboot-os
    - configure-ovs-dpdk
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - install-certs
    - ovn
    - neutron-metadata
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_enable_chassis_gw: true
        edpm_kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-47,50-95
        edpm_network_config_nmstate: true
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false
          - type: sriov_pf
            name: nic6
            mtu: 9000
            numvfs: 2
            use_dhcp: false
            defroute: false
            nm_controlled: true
            hotplug: true
            promisc: false
          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              driver: mlx5_core
              name: dpdk0
              mtu: {{ min_viable_mtu }}
              members:
              - type: sriov_vf
                device: nic6
                vfid: 0
          - type: linux_bond
            name: bond_api
            use_dhcp: false
            bonding_options: "mode=active-backup"
            dns_servers: {{ ctlplane_dns_nameservers }}
            members:
            - type: sriov_vf
              device: nic6
              driver: mlx5_core
              mtu: {{ min_viable_mtu }}
              spoofcheck: false
              promisc: false
              vfid: 1
              primary: true
          - type: vlan
            vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
          - type: ovs_user_bridge
            name: br-link0
            use_dhcp: false
            ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond0
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                - type: interface
                  name: nic4
              - type: ovs_dpdk_port
                name: dpdk2
                members:
                - type: interface
                  name: nic5
          - type: ovs_user_bridge
            name: br-link1
            use_dhcp: false
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond1
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk3
                members:
                - type: interface
                  name: nic2
              - type: ovs_dpdk_port
                name: dpdk4
                members:
                - type: interface
                  name: nic3
        edpm_ovn_bridge_mappings:
          - access:br-ex
          - dpdkmgmt:br-link0
          - dpdkdata0:br-link1
        edpm_ovs_dpdk_memory_channels: 4
        edpm_ovs_dpdk_pmd_core_list: 2,3,50,51
        edpm_ovs_dpdk_socket_memory: 4096,4096
        edpm_tuned_isolated_cores: 2-47,50-95
        edpm_tuned_profile: cpu-partitioning
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
5.5. Deploying the data plane
You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment completes successfully, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources change. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.
Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.
Procedure
- Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack

  - name - The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
- Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>

  - Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the openstack_data_plane_deploy.yaml deployment file.

- Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

  You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

  If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

- Verify that the data plane is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                STATUS   MESSAGE
    data-plane-deploy   True     Setup Complete

    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

  For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.