7.4. Adopting the Networker services to the RHOSO data plane


Adopt the Networker services from your existing Red Hat OpenStack Platform deployment to the Red Hat OpenStack Services on OpenShift (RHOSO) data plane. Networker services can run on either Controller nodes or dedicated Networker nodes. You decide which services to run on the Networker nodes and create a separate OpenStackDataPlaneNodeSet custom resource (CR) for them. If applicable to your environment, you might also decide to implement the following options:

  • Depending on your topology, you might need to run the neutron-metadata service on the nodes, specifically when you want to serve metadata to SR-IOV ports that are hosted on Compute nodes.
  • If you want to run the OVN gateway services on the Networker nodes, keep the ovn service in the list of services to deploy.
  • Optional: you can run the neutron-dhcp service on the Networker nodes instead of the Compute nodes. You might not need neutron-dhcp with the OVN DHCP implementation, unless your deployment uses DHCP relays or advanced DHCP options that are supported by dnsmasq.

Adopt each Controller or Networker node from your existing Red Hat OpenStack Platform deployment that is set as an OVN chassis gateway into Red Hat OpenStack Services on OpenShift (RHOSO). Any node that has the enable-chassis-as-gw option set is considered an OVN gateway chassis. Such nodes become EDPM Networker nodes after adoption.

  1. Check which nodes are running the OVN Controller Gateway agent. The agent list varies depending on the services that you enabled:

    $ oc exec openstackclient -- openstack network agent list
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
    | ID                                   | Agent Type                   | Host                     | Availability Zone | Alive | State | Binary                     |
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
    | e5075ee0-9dd9-4f0a-a42a-6bbdf1a6111c | OVN Controller Gateway agent | controller-0.localdomain |                   | XXX   | UP    | ovn-controller             |
    | f3112349-054c-403a-b00a-e219238192b8 | OVN Controller agent         | compute-0.localdomain    |                   | XXX   | UP    | ovn-controller             |
    | af9dae2d-1c1c-55a8-a743-f84719f6406d | OVN Metadata agent           | compute-0.localdomain    |                   | XXX   | UP    | neutron-ovn-metadata-agent |
    | 51a11df8-a66e-47a2-aec0-52eb8589626c | OVN Controller Gateway agent | controller-1.localdomain |                   | XXX   | UP    | ovn-controller             |
    | bb817e5e-7832-410a-9e67-934dac8c602f | OVN Controller Gateway agent | controller-2.localdomain |                   | XXX   | UP    | ovn-controller             |
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+

Prerequisites

  • Define the shell variables. Based on the agent list output above, controller-0, controller-1, and controller-2 are the target hosts. If you have both Controller and Networker nodes running networker services, include all of those hosts below.

    declare -A networkers
    networkers+=(
      ["controller-0.localdomain"]="192.168.122.100"
      ["controller-1.localdomain"]="192.168.122.101"
      ["controller-2.localdomain"]="192.168.122.102"
      # ...
    )
    • Replace ["<node-name>"]="192.168.122.100" with the name and IP address of the corresponding Networker or Controller node for your environment.
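As an optional sanity check before adoption, you can iterate over the networkers map to confirm that each node is mapped to the expected IP address and, if you uncomment the ssh line, that it is reachable with the adoption SSH key. This is a sketch, not part of the official procedure:

```shell
# Optional sketch: walk the networkers map defined above and report each node.
# Uncomment the ssh line to verify connectivity before adoption.
declare -A networkers
networkers=(
  ["controller-0.localdomain"]="192.168.122.100"
  ["controller-1.localdomain"]="192.168.122.101"
  ["controller-2.localdomain"]="192.168.122.102"
)
for host in "${!networkers[@]}"; do
  ip="${networkers[$host]}"
  echo "networker: ${host} -> ${ip}"
  # ssh -o BatchMode=yes -o ConnectTimeout=5 "root@${ip}" hostname
done
```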

Procedure

  1. Deploy the OpenStackDataPlaneNodeSet CR for your nodes:

    Note

    You can reuse most of the nodeTemplate section from the OpenStackDataPlaneNodeSet CR that you defined for the Compute nodes. You can omit some variables because of the limited set of services that run on Networker nodes.

    $ oc apply -f - <<EOF
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-networker
    spec:
      tlsEnabled: false 1
      networkAttachments:
          - ctlplane
      preProvisioned: true
      services:
        - redhat
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - install-certs
        - ovn
      env:
        - name: ANSIBLE_CALLBACKS_ENABLED
          value: "profile_tasks"
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
      nodes:
        controller-0:
          hostName: controller-0
          ansible:
            ansibleHost: ${networkers[controller-0.localdomain]}
          networks:
          - defaultRoute: true
            fixedIP: ${networkers[controller-0.localdomain]}
            name: ctlplane
            subnetName: subnet1
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
        controller-1:
          hostName: controller-1
          ansible:
            ansibleHost: ${networkers[controller-1.localdomain]}
          networks:
          - defaultRoute: true
            fixedIP: ${networkers[controller-1.localdomain]}
            name: ctlplane
            subnetName: subnet1
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
        controller-2:
          hostName: controller-2
          ansible:
            ansibleHost: ${networkers[controller-2.localdomain]}
          networks:
          - defaultRoute: true
            fixedIP: ${networkers[controller-2.localdomain]}
            name: ctlplane
            subnetName: subnet1
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
      nodeTemplate:
        ansibleSSHPrivateKeySecret: dataplane-adoption-secret
        ansible:
          ansibleUser: root
          ansibleVarsFrom:
          - secretRef:
              name: subscription-manager
          - secretRef:
              name: redhat-registry
          ansibleVars:
            rhc_release: 9.2
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
            # edpm_network_config
            # Default nic config template for a EDPM node
            # These vars are edpm_network_config role vars
            edpm_network_config_template: |
               ---
               {% set mtu_list = [ctlplane_mtu] %}
               {% for network in nodeset_networks %}
               {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %}
               {%- endfor %}
               {% set min_viable_mtu = mtu_list | max %}
               network_config:
               - type: ovs_bridge
                 name: {{ neutron_physical_bridge_name }}
                 mtu: {{ min_viable_mtu }}
                 use_dhcp: false
                 dns_servers: {{ ctlplane_dns_nameservers }}
                 domain: {{ dns_search_domains }}
                 addresses:
                 - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                 routes: {{ ctlplane_host_routes }}
                 members:
                 - type: interface
                   name: nic1
                   mtu: {{ min_viable_mtu }}
                   # force the MAC address of the bridge to this interface
                   primary: true
               {% for network in nodeset_networks %}
                 - type: vlan
                   mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                   vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                   addresses:
                   - ip_netmask:
                       {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                   routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
               {% endfor %}
    
            edpm_network_config_hide_sensitive_logs: false
            #
            # These vars are for the network config templates themselves and are
            # considered EDPM network defaults.
            neutron_physical_bridge_name: br-ctlplane
            neutron_public_interface_name: eth0
    
            # edpm_nodes_validation
            edpm_nodes_validation_validate_controllers_icmp: false
            edpm_nodes_validation_validate_gateway_icmp: false
    
            # edpm ovn-controller configuration
            edpm_ovn_bridge_mappings: <bridge_mappings> 2
            edpm_ovn_bridge: br-int
            edpm_ovn_encap_type: geneve
            ovn_monitor_all: true
            edpm_ovn_remote_probe_interval: 60000
            edpm_ovn_ofctrl_wait_before_clear: 8000
    
            # serve as a OVN gateway
            edpm_enable_chassis_gw: true 3
            timesync_ntp_servers:
            - hostname: clock.redhat.com
            - hostname: clock2.redhat.com
    
    
            gather_facts: false
            enable_debug: false
            # edpm firewall, change the allowed CIDR if needed
            edpm_sshd_configure_firewall: true
            edpm_sshd_allowed_ranges: ['192.168.122.0/24']
            # SELinux module
            edpm_selinux_mode: enforcing
    
            # Do not attempt OVS major upgrades here
            edpm_ovs_packages:
            - openvswitch3.3
    EOF
    1
    If TLS Everywhere is enabled, change spec:tlsEnabled to true.
    2
    Set to the same values that you used in your Red Hat OpenStack Platform 17.1 deployment.
    3
    Set to true to run ovn-controller in gateway mode.
  2. Before adoption, make sure that the OpenStackDataPlaneNodeSet CR uses the same ovn-controller settings that are used on the Networker nodes. This configuration is stored in the external_ids column of the Open_vSwitch table in the Open vSwitch database:

    ovs-vsctl list Open .
    ...
    external_ids        : {hostname=controller-0.localdomain, ovn-bridge=br-int, ovn-bridge-mappings=<bridge_mappings>, ovn-chassis-mac-mappings="datacentre:1e:0a:bb:e6:7c:ad", ovn-cms-options=enable-chassis-as-gw, ovn-encap-ip="172.19.0.100", ovn-encap-tos="0", ovn-encap-type=geneve, ovn-match-northd-version=False, ovn-monitor-all=True, ovn-ofctrl-wait-before-clear="8000", ovn-openflow-probe-interval="60", ovn-remote="tcp:ovsdbserver-sb.openstack.svc:6642", ovn-remote-probe-interval="60000", rundir="/var/run/openvswitch", system-id="2eec68e6-aa21-4c95-a868-31aeafc11736"}
    ...
    • Replace <bridge_mappings> with the value of the bridge mappings in your configuration, for example, "datacentre:br-ctlplane".
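Instead of comparing the record by eye, you can pull the relevant ovn-* keys out of the external_ids output. The record below is a hard-coded sample so the parsing can be followed offline; on a real node you would capture the record with something like `ovs-vsctl get Open_vSwitch . external_ids`:

```shell
# Sketch: extract the ovn-* keys that must match the OpenStackDataPlaneNodeSet CR
# from an external_ids record. The record is a hard-coded sample; on a node you
# would capture it with: ovs-vsctl get Open_vSwitch . external_ids
external_ids='{ovn-bridge=br-int, ovn-bridge-mappings="datacentre:br-ctlplane", ovn-cms-options=enable-chassis-as-gw, ovn-encap-type=geneve, ovn-monitor-all=True}'
for key in ovn-bridge ovn-bridge-mappings ovn-cms-options ovn-encap-type ovn-monitor-all; do
  # Match "key=value" up to the next comma or closing brace.
  value=$(grep -o "${key}=[^,}]*" <<<"$external_ids" | head -n1 | cut -d= -f2-)
  echo "${key}: ${value}"
done
```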
  3. Optional: enable the neutron-metadata service in the OpenStackDataPlaneNodeSet CR:

    $ oc patch openstackdataplanenodeset <networker_CR_name> --type='json' --patch='[
      {
        "op": "add",
        "path": "/spec/services/-",
        "value": "neutron-metadata"
      }]'
    • Replace <networker_CR_name> with the name of the CR that you deployed for your Networker nodes, for example, openstack-networker.
  4. Optional: enable the neutron-dhcp service in the OpenStackDataPlaneNodeSet CR:

    $ oc patch openstackdataplanenodeset <networker_CR_name> --type='json' --patch='[
      {
        "op": "add",
        "path": "/spec/services/-",
        "value": "neutron-dhcp"
      }]'
  5. Run the pre-adoption-validation service for the Networker nodes:

    1. Create an OpenStackDataPlaneDeployment CR that runs only the validation:

      $ oc apply -f - <<EOF
      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneDeployment
      metadata:
        name: openstack-pre-adoption-networker
      spec:
        nodeSets:
        - openstack-networker
        servicesOverride:
        - pre-adoption-validation
      EOF
    2. When the validation is finished, confirm that the status of the Ansible EE pods is Completed:

      $ watch oc get pod -l app=openstackansibleee
      $ oc logs -l app=openstackansibleee -f --max-log-requests 20
    3. Wait for the deployment to reach the Ready status:

      $ oc wait --for condition=Ready openstackdataplanedeployment/openstack-pre-adoption-networker --timeout=10m
  6. Deploy the OpenStackDataPlaneDeployment CR for the Networker nodes:

    $ oc apply -f - <<EOF
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-networker
    spec:
      nodeSets:
      - openstack-networker
    EOF
    Note

    Alternatively, you can include the Networker nodes in the nodeSets list before you deploy the main OpenStackDataPlaneDeployment CR. You cannot add new node sets to an OpenStackDataPlaneDeployment CR after it is deployed.

  7. Clean up any Networking service (neutron) agents that are no longer running.

    Note

    In some cases, agents from an old data plane that was replaced or decommissioned remain in RHOSO. The functionality that those agents provided might now be served by new agents running in RHOSO, or it might be replaced by other components. For example, DHCP agents might no longer be needed because the OVN DHCP implementation in RHOSO provides the same functionality.

    1. List the agents:

      $ oc exec openstackclient -- openstack network agent list
      +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
      | ID                                   | Agent Type                   | Host                     | Availability Zone | Alive | State | Binary                     |
      +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
      | e5075ee0-9dd9-4f0a-a42a-6bbdf1a6111c | OVN Controller Gateway agent | controller-0.localdomain |                   | :-)   | UP    | ovn-controller             |
      | 856960f0-5530-46c7-a331-6eadcba362da | DHCP agent                   | controller-1.localdomain | nova              | XXX   | UP    | neutron-dhcp-agent         |
      | 8bd22720-789f-45b8-8d7d-006dee862bf9 | DHCP agent                   | controller-2.localdomain | nova              | XXX   | UP    | neutron-dhcp-agent         |
      | e584e00d-be4c-4e98-a11a-4ecd87d21be7 | DHCP agent                   | controller-0.localdomain | nova              | XXX   | UP    | neutron-dhcp-agent         |
      +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
    2. If any agent in the list shows XXX in the Alive field, verify the Host and Agent Type, and confirm that the agent is no longer needed and has been permanently stopped on the Red Hat OpenStack Platform host. Then delete the agent:

      $ oc exec openstackclient -- openstack network agent delete <agent_id>
      • Replace <agent_id> with the ID of the agent that you want to delete, for example, 856960f0-5530-46c7-a331-6eadcba362da.
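The dead-agent cleanup can also be scripted. The rows below are a hard-coded sample in "ID Alive" value format, taken from the agent list shown above; in a live deployment they would come from something like `oc exec openstackclient -- openstack network agent list -f value -c ID -c Alive`, and the delete call is left commented so the loop is safe to dry-run:

```shell
# Sketch: collect agents whose Alive field is XXX and delete them.
# Sample rows in "ID Alive" value format; live data would come from:
#   oc exec openstackclient -- openstack network agent list -f value -c ID -c Alive
agents='e5075ee0-9dd9-4f0a-a42a-6bbdf1a6111c :-)
856960f0-5530-46c7-a331-6eadcba362da XXX
8bd22720-789f-45b8-8d7d-006dee862bf9 XXX'
dead=()
while read -r id alive; do
  if [ "$alive" = "XXX" ]; then
    dead+=("$id")
    echo "dead agent: $id"
    # oc exec openstackclient -- openstack network agent delete "$id"
  fi
done <<<"$agents"
echo "found ${#dead[@]} dead agents"
```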

Verification

  1. Confirm that all Ansible EE pods reach the Completed status:

    $ watch oc get pod -l app=openstackansibleee
    $ oc logs -l app=openstackansibleee -f --max-log-requests 20
  2. Wait for the data plane node set to reach the Ready status:

    $ oc wait --for condition=Ready osdpns/<networker_CR_name> --timeout=30m
    • Replace <networker_CR_name> with the name of the CR that you deployed for your Networker nodes, for example, openstack-networker.
  3. Verify that the Networking service (neutron) agents are running. The agent list varies depending on the services that you enabled:

    $ oc exec openstackclient -- openstack network agent list
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
    | ID                                   | Agent Type                   | Host                     | Availability Zone | Alive | State | Binary                     |
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+
    | e5075ee0-9dd9-4f0a-a42a-6bbdf1a6111c | OVN Controller Gateway agent | controller-0.localdomain |                   | :-)   | UP    | ovn-controller             |
    | f3112349-054c-403a-b00a-e219238192b8 | OVN Controller agent         | compute-0.localdomain    |                   | :-)   | UP    | ovn-controller             |
    | af9dae2d-1c1c-55a8-a743-f84719f6406d | OVN Metadata agent           | compute-0.localdomain    |                   | :-)   | UP    | neutron-ovn-metadata-agent |
    | 51a11df8-a66e-47a2-aec0-52eb8589626c | OVN Controller Gateway agent | controller-1.localdomain |                   | :-)   | UP    | ovn-controller             |
    | bb817e5e-7832-410a-9e67-934dac8c602f | OVN Controller Gateway agent | controller-2.localdomain |                   | :-)   | UP    | ovn-controller             |
    +--------------------------------------+------------------------------+--------------------------+-------------------+-------+-------+----------------------------+