5.3. Creating an OpenStackDataPlaneNodeSet CR for a set of pre-provisioned Networker nodes
You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of data plane nodes that act as Networker nodes, and you can define as many node sets as your deployment requires. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Use the nodeTemplate field to configure common properties that apply to all of the nodes in the OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configuration overrides the values inherited from nodeTemplate.
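For example, a value set for an individual node under nodes takes precedence over the same field set under nodeTemplate. The following minimal sketch illustrates the relationship; the svc-deploy user name is a hypothetical override, not a value from this procedure:

```yaml
spec:
  nodeTemplate:
    ansible:
      ansibleUser: cloud-admin     # inherited by every node in the set
  nodes:
    edpm-networker-0:
      ansible:
        ansibleUser: svc-deploy    # hypothetical override for this node only
    edpm-networker-1: {}           # inherits cloud-admin from nodeTemplate
```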
For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes without OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.

For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes with OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.
Procedure
Create a file named openstack_preprovisioned_networker_node_set.yaml on your workstation to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"

Include the services field to override the default services. Remove nova, libvirt, and any other services that Networker nodes do not require:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
      ...
      services:
        - redhat
        - bootstrap
        - download-cache
        - reboot-os
        - configure-ovs-dpdk
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - install-certs
        - ovn
        - neutron-metadata
        - neutron-dhcp

Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane

Enable the chassis as a gateway:

    spec:
      ...
      nodeTemplate:
        ansible:
          ...
          edpm_enable_chassis_gw: true

Specify that the nodes in this set are pre-provisioned:

    spec:
      ...
      preProvisioned: true

Add the SSH key secret that you created so that Ansible can connect to the data plane nodes:

    nodeTemplate:
      ansibleSSHPrivateKeySecret: <secret-key>
Replace <secret-key> with the name of the SSH key Secret CR that you created for this node set in <link>, for example dataplane-ansible-ssh-private-key-secret.
Create a persistent volume claim (PVC) in the openstack namespace on the Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request log storage from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For more information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.

Enable persistent logging for the Networker nodes:

    nodeTemplate:
      ...
      extraMounts:
        - extraVolType: Logs
          volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
          mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
Replace <pvc_name> with the name of the PVC storage in the RHOCP cluster.
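A PVC that satisfies these requirements might look like the following sketch. The name ansible-logs-pvc and the 5Gi size are assumptions; adapt them to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc    # hypothetical name; reference it as <pvc_name>
  namespace: openstack
spec:
  volumeMode: Filesystem    # required: ansible-runner writes FIFO files
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # assumed size; adjust for your log volume
  # Do not use a storage class backed by the NFS volume plugin.
```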
Specify the management network:

    nodeTemplate:
      ...
      managementNetwork: ctlplane

Specify the Secret CRs that provide the user name and password to register the operating system of any nodes that are not registered with the Red Hat Customer Portal, and enable repositories for the nodes. The following example demonstrates how to register the nodes with the Red Hat Content Delivery Network (CDN). For information about how to register nodes with Red Hat Satellite 6.13, see Managing Hosts.

    nodeTemplate:
      ...
      ansible:
        ansibleUser: cloud-admin 1
        ansiblePort: 22
        ansibleVarsFrom:
          - secretRef:
              name: subscription-manager
          - secretRef:
              name: redhat-registry
        ansibleVars: 2
          rhc_release: 9.4
          rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
          edpm_bootstrap_release_version_package: []
1. The user associated with the secret that you created in <link> [Creating the data plane secrets].
2. The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
For a complete list of Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

Add the network configuration templates to apply to your Networker nodes:

    nodeTemplate:
      ...
      ansible:
        ...
        ansibleVars:
          ...
          neutron_physical_bridge_name: br-ex
          neutron_public_interface_name: eth0
          edpm_network_config_nmstate: true 1
          edpm_network_config_update: false 2
1. Sets the os-net-config provider to nmstate. The default value is false. Change it to true unless you have a specific limitation with the nmstate provider, in which case you must use the ifcfg provider. For more information about the benefits and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
2. Set the edpm_network_config_update variable to false when you first deploy the node set. Set it to true when you update or adopt the node set, so that network configuration changes are applied during the update, and then reset it to false after the update or adoption completes. If you do not reset edpm_network_config_update to false, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR that includes the configure-network service is created.
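The lifecycle of the edpm_network_config_update variable described above can be summarized as follows (a sketch of the variable's values over time, not a complete nodeTemplate):

```yaml
# 1. Initial deployment of the node set:
ansibleVars:
  edpm_network_config_update: false

# 2. Before updating or adopting the node set, so that network
#    configuration changes are applied during the update:
ansibleVars:
  edpm_network_config_update: true

# 3. After the update or adoption completes, reset the variable so the
#    configuration is not reapplied by every later deployment:
ansibleVars:
  edpm_network_config_update: false
```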
The following example applies a VLAN network configuration with DPDK to a set of data plane Networker nodes:
    edpm_network_config_template: |
      ...
      {% set mtu_list = [ctlplane_mtu] %}
      {% for network in nodeset_networks %}
      {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
      {%- endfor %}
      {% set min_viable_mtu = mtu_list | max %}
      network_config:
      - type: ovs_user_bridge
        name: {{ neutron_physical_bridge_name }}
        mtu: {{ min_viable_mtu }}
        use_dhcp: false
        dns_servers: {{ ctlplane_dns_nameservers }}
        domain: {{ dns_search_domains }}
        addresses:
        - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
        routes: {{ ctlplane_host_routes }}
        members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
          - type: interface
            name: nic1
      - type: linux_bond
        name: bond_api
        use_dhcp: false
        bonding_options: "mode=active-backup"
        dns_servers: {{ ctlplane_dns_nameservers }}
        members:
        - type: interface
          name: nic2
          primary: true
      - type: vlan
        vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
        device: bond_api
        addresses:
        - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
      - type: ovs_user_bridge
        name: br-link0
        use_dhcp: false
        ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
        addresses:
        - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}
        members:
        - type: ovs_dpdk_bond
          name: dpdkbond0
          mtu: 9000
          rx_queue: 1
          ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
          members:
          - type: ovs_dpdk_port
            name: dpdk1
            members:
            - type: interface
              name: nic3
          - type: ovs_dpdk_port
            name: dpdk2
            members:
            - type: interface
              name: nic4
      - type: ovs_user_bridge
        name: br-link1
        use_dhcp: false
        members:
        - type: ovs_dpdk_bond
          name: dpdkbond1
          mtu: 9000
          rx_queue: 1
          ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
          members:
          - type: ovs_dpdk_port
            name: dpdk3
            members:
            - type: interface
              name: nic5
          - type: ovs_dpdk_port
            name: dpdk4
            members:
            - type: interface
              name: nic6
    neutron_physical_bridge_name: br-ex

The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:

    edpm_network_config_template: |
      ---
      {% set mtu_list = [ctlplane_mtu] %}
      {% for network in nodeset_networks %}
      {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
      {%- endfor %}
      {% set min_viable_mtu = mtu_list | max %}
      network_config:
      - type: ovs_bridge
        name: {{ neutron_physical_bridge_name }}
        mtu: {{ min_viable_mtu }}
        use_dhcp: false
        dns_servers: {{ ctlplane_dns_nameservers }}
        domain: {{ dns_search_domains }}
        addresses:
        - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
        routes: {{ ctlplane_host_routes }}
        members:
        - type: interface
          name: nic2
          mtu: {{ min_viable_mtu }}
          # force the MAC address of the bridge to this interface
          primary: true
        {% for network in nodeset_networks %}
        - type: vlan
          mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
          vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
          addresses:
          - ip_netmask: >-
              {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
        {% endfor %}

For more information about data plane network configuration, see Customizing data plane networks in the Configuring networking services guide.
Under the nodeTemplate section, add the common configuration for the set of nodes in this group. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties that you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

Define each node in this node set:

    ...
    nodes:
      edpm-networker-0:
        hostName: networker-0
        networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
        ansible:
          ansibleHost: 192.168.122.100
          ansibleUser: cloud-admin
          ansibleVars:
            fqdn_internal_api: edpm-networker-0.example.com
      edpm-networker-1:
        hostName: edpm-networker-1
        networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
        ansible:
          ansibleHost: 192.168.122.101
          ansibleUser: cloud-admin
          ansibleVars:
            fqdn_internal_api: edpm-networker-1.example.com

Note:
- Nodes defined in the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for a specific node and in the nodeTemplate section, the node-specific value overrides the value in the nodeTemplate section.
- You do not need to replicate all of the nodeTemplate Ansible variables for a node to override a default and set node-specific values. You only need to configure the Ansible variables that you want to override for that node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
For information about the properties that you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
Save the openstack_preprovisioned_networker_node_set.yaml definition file.

Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack

Verify that the data plane resources were created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about data plane conditions and states, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters and - (hyphens) or . (periods), must start and end with an alphanumeric character, and must have a maximum length of 53 characters.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-networker-nodes
namespace: openstack
spec:
services:
- bootstrap
- download-cache
- reboot-os
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: true
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
neutron_physical_bridge_name: br-ex
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-networker-0:
hostName: edpm-networker-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.100
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.100
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.100
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-0.example.com
edpm-networker-1:
hostName: edpm-networker-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.101
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.101
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.101
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-networker-1.example.com
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with OVS-DPDK and some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters and - (hyphens) or . (periods), must start and end with an alphanumeric character, and must have a maximum length of 53 characters.
apiVersion: v1
kind: ConfigMap
metadata:
name: networker-nodeset-values
annotations:
config.kubernetes.io/local-config: "true"
data:
root_password: cmVkaGF0Cg==
preProvisioned: false
baremetalSetTemplate:
ctlplaneInterface: <control plane interface>
cloudUserName: cloud-admin
provisioningInterface: <provisioning network interface>
bmhLabelSelector:
app: openstack-networker
passwordSecret:
name: baremetalset-password-secret
namespace: openstack
ssh_keys:
# Authorized keys that will have access to the dataplane networkers via SSH
authorized: <authorized key>
# The private key that will have access to the dataplane networkers via SSH
private: <private key>
# The public key that will have access to the dataplane networkers via SSH
public: <public key>
nodeset:
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVars:
edpm_enable_chassis_gw: true
...
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
edpm_bootstrap_command: |
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
...
edpm_network_config_template: |
...
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: interface
name: nic1
use_dhcp: false
- type: interface
name: nic2
use_dhcp: false
- type: ovs_user_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: ovs_dpdk_port
rx_queue: 1
name: dpdk0
members:
- type: interface
name: nic3
# These vars are for the network config templates themselves and are
# considered EDPM network defaults.
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: nic1
# edpm_nodes_validation
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
dns_search_domains: []
gather_facts: false
# edpm firewall, change the allowed CIDR if needed
edpm_sshd_configure_firewall: true
edpm_sshd_allowed_ranges:
- 192.168.122.0/24
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-networker-0:
hostName: edpm-networker-0
services:
- bootstrap
- download-cache
- reboot-os
- configure-ovs-dpdk
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- install-certs
- ovn
- neutron-metadata