5.3. Creating an OpenStackDataPlaneNodeSet CR for a set of pre-provisioned Networker nodes


You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of data plane nodes that act as Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Use the nodeTemplate field to configure common properties that apply to all nodes in the OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configuration overrides the values inherited from the nodeTemplate.
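
For example, a value set under nodeTemplate applies to every node unless a node defines its own value. The following minimal sketch illustrates the precedence; the field layout matches the CR examples later in this section, but the override value is illustrative only:

    spec:
      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin        # default for every node in the set
      nodes:
        edpm-networker-0:
          hostName: edpm-networker-0      # inherits ansibleUser: cloud-admin
        edpm-networker-1:
          hostName: edpm-networker-1
          ansible:
            ansibleUser: networker-admin  # hypothetical node-specific value; overrides the nodeTemplate value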

Tip

For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes without OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.

For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes with OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.

Procedure

  1. Create a file named openstack_preprovisioned_networker_node_set.yaml on your workstation to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes 1
      namespace: openstack
    spec:
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    1 The OpenStackDataPlaneNodeSet CR name must be unique, must contain only lower case alphanumeric characters and - (hyphens) or . (periods), must start and end with an alphanumeric character, and must have a maximum length of 53 characters. If necessary, replace the sample name networker-nodes with a name that better describes your node set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Include the services field to override the default list of services. Remove the nova, libvirt, and other services that Networker nodes do not require:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
    ...
      services:
       - redhat
       - bootstrap
       - download-cache
       - reboot-os
       - configure-ovs-dpdk  1
       - configure-network
       - validate-network
       - install-os
       - configure-os
       - ssh-known-hosts
       - run-os
       - install-certs
       - ovn
       - neutron-metadata   2
       - neutron-dhcp       3
    1 The configure-ovs-dpdk service is required only if you use DPDK NICs in your deployment.
    2 The neutron-metadata service is required only if you use SR-IOV ports in your deployment.
    3 Optionally, you can run the neutron-dhcp service on Networker nodes. You might not need neutron-dhcp with the OVN DHCP implementation unless your deployment uses DHCP relays, or advanced DHCP options that are supported by dnsmasq but not by OVN DHCP.
  3. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  4. Enable the chassis as a gateway:

    spec:
    ...
      nodeTemplate:
        ansible:
          ...
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
  5. Specify that the nodes in this set are pre-provisioned:

    spec:
    ...
      nodeTemplate:
        ansible:
          ...
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
      preProvisioned: true
  6. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR that you created for this node set in <link>, for example, dataplane-ansible-ssh-private-key-secret.
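    If you have not yet created this Secret CR, the following sketch shows one way to create it with oc. The key names (ssh-privatekey, ssh-publickey) and the placeholder paths are assumptions; confirm them against the data plane secrets procedure referenced above.

        $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
            --from-file=ssh-privatekey=<path_to_private_key> \
            --from-file=ssh-publickey=<path_to_public_key> \
            -n openstack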
  7. Create a persistent volume claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to store logs. For more information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
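
    The following is a minimal PVC sketch that meets these requirements. The claim name, size, and storage class are illustrative assumptions; adapt them to your cluster and reuse the claim name as <pvc_name> in the next step.

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: ansible-logs-pvc           # illustrative name
          namespace: openstack
        spec:
          accessModes:
            - ReadWriteOnce                # required access mode
          volumeMode: Filesystem           # required volume mode; do not back this PVC with NFS
          resources:
            requests:
              storage: 10Gi                # illustrative size
          # storageClassName: <storage_class>   # optional: set to a non-NFS storage class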
  8. Enable persistent logging for the Networker nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs that provide the user names and passwords to register the operating system of nodes that are not registered to the Red Hat Customer Portal, and to enable repositories for your nodes. The following example demonstrates how to register nodes to the Red Hat Content Delivery Network (CDN); an illustrative sketch of how the referenced secrets might be created follows at the end of this step. For information about how to register nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: 2
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
    1 The user associated with the secret that you created in <link> [Creating the data plane secrets].
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
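
    The following sketch shows one way the subscription-manager and redhat-registry secrets referenced by ansibleVarsFrom might be created. The Ansible variable names (rhc_auth, edpm_container_registry_logins) are assumptions based on the edpm-ansible roles; confirm them against the data plane secrets procedure before use.

        $ oc create secret generic subscription-manager -n openstack \
            --from-literal rhc_auth='{"login": {"username": "<rh_username>", "password": "<rh_password>"}}'
        $ oc create secret generic redhat-registry -n openstack \
            --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<registry_username>": "<registry_password>"}}'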

  11. Add the network configuration templates to apply to your Networker nodes.

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_nmstate: true 1
            edpm_network_config_update: false 2
    1 Sets the os-net-config provider to nmstate. The default value is false. Change it to true unless the nmstate provider has specific limitations that require you to use the ifcfg provider instead. For more information about the advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
    2 Set the edpm_network_config_update variable to false when you first deploy the node set. Set it to true when you update or adopt the node set so that network configuration changes are applied during the update, then reset the variable to false after the update or adoption. If you do not reset it to false, the updated network configuration is reapplied every time you create an OpenStackDataPlaneDeployment CR that includes the configure-network service.

    The following example applies VLAN network configuration with DPDK to a set of data plane Networker nodes:

            edpm_network_config_template: |
              ...
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_user_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: ovs_dpdk_port
                  name: dpdk0
                  members:
                  - type: interface
                    name: nic1
    
    
              - type: linux_bond
                name: bond_api
                use_dhcp: false
                bonding_options: "mode=active-backup"
                dns_servers: {{ ctlplane_dns_nameservers }}
                members:
                - type: interface
                  name: nic2
                  primary: true
    
    
              - type: vlan
                vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
                device: bond_api
                addresses:
                - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
    
    
              - type: ovs_user_bridge
                name: br-link0
                use_dhcp: false
                ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
                addresses:
                - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
                members:
                - type: ovs_dpdk_bond
                  name: dpdkbond0
                  mtu: 9000
                  rx_queue: 1
                  ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
                  members:
                  - type: ovs_dpdk_port
                    name: dpdk1
                    members:
                    - type: interface
                      name: nic3
                  - type: ovs_dpdk_port
                    name: dpdk2
                    members:
                    - type: interface
                      name: nic4
    
    
              - type: ovs_user_bridge
                name: br-link1
                use_dhcp: false
                members:
                - type: ovs_dpdk_bond
                  name: dpdkbond1
                  mtu: 9000
                  rx_queue: 1
                  ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
                  members:
                  - type: ovs_dpdk_port
                    name: dpdk3
                    members:
                    - type: interface
                      name: nic5
                  - type: ovs_dpdk_port
                    name: dpdk4
                    members:
                    - type: interface
                      name: nic6
            neutron_physical_bridge_name: br-ex

    The following example applies VLAN network configuration to a set of data plane Networker nodes without DPDK:

            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
                - type: ovs_bridge
                  name: {{ neutron_physical_bridge_name }}
                  mtu: {{ min_viable_mtu }}
                  use_dhcp: false
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  domain: {{ dns_search_domains }}
                  addresses:
                    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                  routes: {{ ctlplane_host_routes }}
                  members:
                    - type: interface
                      name: nic2
                      mtu: {{ min_viable_mtu }}
                      # force the MAC address of the bridge to this interface
                      primary: true
              {% for network in nodeset_networks %}
                    - type: vlan
                      mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                      addresses:
                        - ip_netmask: >-
                            {{
                              lookup('vars', networks_lower[network] ~ '_ip')
                            }}/{{
                              lookup('vars', networks_lower[network] ~ '_cidr')
                            }}
                      routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}

    For more information about data plane network configuration, see Customizing data plane networks in Configuring networking services.

  12. Under the nodeTemplate section, add the common configuration for the set of nodes in this group. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties that you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  13. Define each node in this node set:

    ...
      nodes:
        edpm-networker-0: 1
          hostName: edpm-networker-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: 4
              fqdn_internal_api: edpm-networker-0.example.com
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
    1 The node definition reference, for example edpm-networker-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network. The address must be within the allocation range defined for the network in the NetConfig CR; an illustrative NetConfig fragment follows this step.
    4 Node-specific Ansible variables that customize the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and in the nodeTemplate section, the node-specific value overrides the value from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the defaults and set some node-specific values. You only need to configure the Ansible variables that you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties that you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
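
    For reference, the fixed IP addresses that you assign above must fall inside the allocation ranges defined in your NetConfig CR. The following is a minimal sketch of a NetConfig fragment with illustrative values, not the definition used by this procedure:

        apiVersion: network.openstack.org/v1beta1
        kind: NetConfig
        metadata:
          name: netconfig
          namespace: openstack
        spec:
          networks:
          - name: ctlplane
            dnsDomain: ctlplane.example.com    # illustrative domain
            subnets:
            - name: subnet1
              cidr: 192.168.122.0/24
              gateway: 192.168.122.1
              allocationRanges:                # fixedIP values such as 192.168.122.100 must be inside a range
              - start: 192.168.122.100
                end: 192.168.122.250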

  14. Save the openstack_preprovisioned_networker_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset networker-nodes --for condition=SetupReady --timeout=10m

    The command returns a condition met message when the status is SetupReady, otherwise it returns a timeout error.

    For information about data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
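    If the command times out, you can inspect the node set conditions directly. This is a generic oc example that assumes the node set name used in this procedure:

        $ oc describe openstackdataplanenodeset networker-nodes -n openstack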

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep networker-nodes
    dataplanenodeset-networker-nodes Opaque 1 3m50s
  18. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in your set. The OpenStackDataPlaneNodeSet CR name must be unique, must contain only lower case alphanumeric characters and - (hyphens) or . (periods), must start and end with an alphanumeric character, and must have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-networker-nodes
  namespace: openstack
spec:
  services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn

  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        neutron_physical_bridge_name: br-ex
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-networker-0:
      hostName: edpm-networker-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-0.example.com
    edpm-networker-1:
      hostName: edpm-networker-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-1.example.com

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with OVS-DPDK and some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in your set. The OpenStackDataPlaneNodeSet CR name must be unique, must contain only lower case alphanumeric characters and - (hyphens) or . (periods), must start and end with an alphanumeric character, and must have a maximum length of 53 characters.

apiVersion: v1
kind: ConfigMap
metadata:
  name: networker-nodeset-values
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  root_password: cmVkaGF0Cg==
  preProvisioned: false
  baremetalSetTemplate:
    ctlplaneInterface: <control plane interface>
    cloudUserName: cloud-admin
    provisioningInterface: <provisioning network interface>
    bmhLabelSelector:
      app: openstack-networker
    passwordSecret:
      name: baremetalset-password-secret
      namespace: openstack
  ssh_keys:
    # Authorized keys that will have access to the dataplane networkers via SSH
    authorized: <authorized key>
    # The private key that will have access to the dataplane networkers via SSH
    private: <private key>
    # The public key that will have access to the dataplane networkers via SSH
    public: <public key>
  nodeset:
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_enable_chassis_gw: true
        ...
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false


          - type: interface
            name: nic2
            use_dhcp: false


          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              rx_queue: 1
              name: dpdk0
              members:
              - type: interface
                name: nic3
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: nic1
        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        dns_search_domains: []
        gather_facts: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges:
          - 192.168.122.0/24
    networks:
      - defaultRoute: true
        name: ctlplane
        subnetName: subnet1
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    nodes:
      edpm-networker-0:
        hostName: edpm-networker-0
    services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-ovs-dpdk
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn
      - neutron-metadata