Chapter 5. Deploying DCN node sets
You deploy node sets by using the same process regardless of whether you deploy them at the central location or at a remote location. Deploy each edge site in a separate availability zone. For example, deploy the central location node set in az0, the first edge site in az1, and so on.
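As a sketch of this layout: the availability zone for a node set is selected through the `edpm_ovn_availability_zones` Ansible variable, which is set later in this procedure. A node set for an edge site would differ from the central one only in that value; the fragment below is illustrative, not a complete CR:

```yaml
# Hypothetical fragment: node set for the first edge site, pinned to az1.
# The central location node set would use az0 instead, the second edge
# site az2, and so on.
spec:
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_ovn_availability_zones:
          - az1
```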
5.1. Configuring the data plane node networks
You must configure the data plane node networks to accommodate the Red Hat Ceph Storage networking requirements.
Prerequisites

- The control plane deployment is complete, but has not yet been modified to use Ceph Storage.
- The data plane nodes are pre-provisioned with an operating system.
- The data plane nodes are accessible through an SSH key that Ansible can use.
- If you are using HCI, the data plane nodes have disks available to use as Ceph OSDs.
- A minimum of three available data plane nodes. Ceph Storage clusters must have a minimum of three nodes to ensure redundancy.
Procedure
On your workstation, create a file named `dcn-data-plane-networks.yaml` to define an `OpenStackDataPlaneNodeSet` CR that configures the data plane node networks:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: dcn-data-plane-networks
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
```

Specify the services to apply to the nodes:

```yaml
spec:
  ...
  services:
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - ceph-hci-pre
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
```

Set the `edpm_enable_chassis_gw` and `edpm_ovn_availability_zones` fields in the data plane:

```yaml
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  nodeTemplate:
    ansible:
      ansiblePort: 22
      ansibleUser: cloud-admin
      ansibleVars:
        edpm_enable_chassis_gw: true
        edpm_ovn_availability_zones:
          - az0
```

Optional: The `ceph-hci-pre` service prepares the data plane nodes to host Red Hat Ceph Storage services after network configuration by using the `edpm_ceph_hci_pre` edpm-ansible role. By default, the `edpm_ceph_hci_pre_enabled_services` parameter of this role contains only the RBD, RGW, and NFS services. DCN supports only the RBD service at DCN sites. If you are deploying HCI, disable the RGW and NFS services by adding the `edpm_ceph_hci_pre_enabled_services` parameter with only the Ceph RBD services:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  nodeTemplate:
    ansible:
      ansiblePort: 22
      ansibleUser: cloud-admin
      ansibleVars:
        edpm_ceph_hci_pre_enabled_services:
          - ceph_mon
          - ceph_mgr
          - ceph_osd
  ...
```

Note: If other services, such as the Dashboard, are deployed with HCI nodes, you must add them to the `edpm_ceph_hci_pre_enabled_services` parameter list. For more information about this role, see the edpm_ceph_hci_pre role.

Configure the Red Hat Ceph Storage cluster network for storage management.
The following example has three nodes. It assumes that the storage management network is on `VLAN23`:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  nodeTemplate:
    ansible:
      ansiblePort: 22
      ansibleUser: cloud-admin
      ansibleVars:
        edpm_ceph_hci_pre_enabled_services:
          - ceph_mon
          - ceph_mgr
          - ceph_osd
        edpm_fips_mode: check
        edpm_iscsid_image: "{{ registry_url }}/openstack-iscsid:{{ image_tag }}"
        edpm_logrotate_crond_image: "{{ registry_url }}/openstack-cron:{{ image_tag }}"
        edpm_network_config_hide_sensitive_logs: false
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:00:1e:af:6b
            nic2: 52:54:00:d9:cb:f4
          edpm-compute-1:
            nic1: 52:54:00:f2:bc:af
            nic2: 52:54:00:f1:c7:dd
          edpm-compute-2:
            nic1: 52:54:00:dd:33:14
            nic2: 52:54:00:50:fb:c3
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic2
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
            {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% endfor %}
        edpm_neutron_metadata_agent_image: "{{ registry_url }}/openstack-neutron-metadata-agent-ovn:{{ image_tag }}"
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        edpm_selinux_mode: enforcing
        edpm_sshd_allowed_ranges:
          - 192.168.111.0/24
          - 192.168.122.0/24
          - 192.168.133.0/24
          - 192.168.144.0/24
        edpm_sshd_configure_firewall: true
        enable_debug: false
        gather_facts: false
        image_tag: current-podified
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        service_net_map:
          nova_api_network: internalapi
          nova_libvirt_network: internalapi
        storage_mgmt_cidr: "24"
        storage_mgmt_host_routes: []
        storage_mgmt_mtu: 9000
        storage_mgmt_vlan_id: 23
        storage_mtu: 9000
        timesync_ntp_servers:
          - hostname: pool.ntp.org
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    managementNetwork: ctlplane
    networks:
      - defaultRoute: true
        name: ctlplane
        subnetName: subnet1
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
  nodes:
    edpm-compute-0:
      ansible:
        host: 192.168.122.100
      hostName: compute-0
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.100
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: storagemgmt
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
    edpm-compute-1:
      ansible:
        host: 192.168.122.101
      hostName: compute-1
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.101
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: storagemgmt
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
    edpm-compute-2:
      ansible:
        host: 192.168.122.102
      hostName: compute-2
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.102
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: storagemgmt
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
  preProvisioned: true
  services:
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - ceph-hci-pre
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
```

Apply the CR:
```
$ oc apply -f <dataplane_cr_file>
```

Replace `<dataplane_cr_file>` with the name of your file.

Note: Ansible does not configure or validate the networks until you create the `OpenStackDataPlaneDeployment` CRD.
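As a minimal sketch, a deployment CR that triggers configuration of the node set defined in this procedure might look like the following. The CR name `dcn-data-plane-deploy` is a hypothetical choice; `nodeSets` references the node set created earlier:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: dcn-data-plane-deploy  # hypothetical name
  namespace: openstack
spec:
  # List every OpenStackDataPlaneNodeSet that this deployment configures.
  nodeSets:
    - dcn-data-plane-networks
```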
Create the `OpenStackDataPlaneDeployment` CRD, as described in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. The `OpenStackDataPlaneDeployment` CRD references the `OpenStackDataPlaneNodeSet` CRs so that Ansible configures the services on the data plane nodes.

To confirm the network configuration, complete the following steps:
- SSH in to a data plane node.
- Use the `ip a` command to display the configured networks.
- Confirm that the storage networks are in the list of configured networks.
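As an illustrative check, assuming the storage management VLAN ID 23 from the example above, you can filter the `ip` output for the expected VLAN interface. The snippet simulates the command's output so the filter itself is runnable anywhere; on a real node, set the variable from `ip -brief addr` instead, as noted in the comment:

```shell
# Simulated `ip -brief addr` output from a data plane node (illustrative values).
# On a real node, use instead: ip_output=$(ip -brief addr)
ip_output='vlan23@br-ex       UP    172.20.0.100/24
vlan20@br-ex       UP    172.17.0.100/24
br-ex              UP    192.168.122.100/24'

# The storage management interface should appear as vlan23.
if echo "$ip_output" | grep -q '^vlan23'; then
  echo "storage management network configured"
else
  echo "storage management network MISSING"
fi
```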