Chapter 6. Deployment Examples
6.1. Model installation scenario using tenant networks
This section explores an example installation of OpenDaylight with OpenStack in a production environment. This scenario uses tunneling (VXLAN) for tenant traffic separation.
6.1.1. Physical topology
The topology of this scenario consists of six nodes:
- 1 x director undercloud node
- 3 x OpenStack overcloud Controller nodes with the OpenDaylight SDN controller installed in addition to the other OpenStack services
- 2 x OpenStack overcloud Compute nodes
6.1.2. Planning the physical network environment
The overcloud Controller nodes use three network interface cards (NICs):
| Name | Purpose |
|---|---|
| nic1 | Management network (for example, accessing the node through SSH) |
| nic2 | Tenant (VXLAN) carrier, provisioning (PXE, DHCP), and Internal API networks |
| nic3 | Public API network access |
The overcloud Compute nodes are equipped with three NICs:
| Name | Purpose |
|---|---|
| nic1 | Management network |
| nic2 | Tenant carrier, provisioning, and Internal API networks |
| nic3 | External (floating IP) network |
The undercloud node is equipped with two NICs:
| Name | Purpose |
|---|---|
| nic1 | Used for the Management network |
| nic2 | Used for the Provisioning network |
6.1.3. Planning NIC connectivity
In this case, the environment files use abstracted, numbered interfaces (nic1, nic2) rather than the actual device names presented by the host operating system (such as eth0 or eno2). Hosts that belong to the same role do not need identical network interface device names. There is no problem if one host uses the em1 and em2 interfaces while another uses eno1 and eno2: each NIC is referred to as nic1 and nic2.
The abstracted NIC scheme relies only on interfaces that are live and connected. For hosts with different numbers of interfaces, it is enough to use the minimal number of interfaces needed to connect the hosts. For example, if one host has four physical interfaces and the other has six, you should use only nic1, nic2, nic3, and nic4 on both hosts.
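The abstract numbering described above can be illustrated with a short shell sketch. The device names em1 and em2 are sample values taken from the example in the text, not real hardware:

```shell
# Illustrative only: number a host's connected interfaces in order,
# the way the abstracted nic1, nic2, ... scheme refers to them.
# em1 and em2 are sample device names, not real hardware.
printf '%s\n' em1 em2 | awk '{ printf "nic%d -> %s\n", NR, $1 }'
# nic1 -> em1
# nic2 -> em2
```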
6.1.4. Planning networks, VLANs, and IP addresses
This scenario uses network isolation to separate the Management, Provisioning, Internal API, Tenant, Public API, and Floating IP network traffic. The figure is an example of the network configuration. It shows a deployment with a custom role; if preferred, you can instead include OpenDaylight within the Red Hat OpenStack Platform controller, which is the default setup. The IPMI network, the NICs, and the routing are not shown in this scenario. Depending on the OpenStack configuration, you may need additional networks.
Figure 6.1. Detailed network topology used in this scenario
The following table shows the VLAN ID and IP subnet associated with each network:
| Network | VLAN ID | IP subnet |
|---|---|---|
| Provisioning | Native | 192.0.5.0/24 |
| Internal API | 600 | 172.17.0.0/24 |
| Tenant | 603 | 172.16.0.0/24 |
| Public API | 411 | 10.35.184.144/28 |
| Floating IP | 412 | 10.35.186.146/28 |
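The Public API and Floating IP networks in the table use /28 subnets. A quick sketch of the address arithmetic:

```shell
# A /28 prefix leaves 32 - 28 = 4 host bits, i.e. 16 addresses,
# of which 14 are usable hosts (network and broadcast excluded).
prefix=28
total=$(( 1 << (32 - prefix) ))
echo "total=$total usable=$(( total - 2 ))"
# total=16 usable=14
```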
The OpenStack Platform director creates the br-isolated OVS bridge and adds a VLAN interface for each network defined in the network configuration files. The director also creates the br-ex bridge with the relevant network interface attached to it.
Ensure that the physical network switches that provide connectivity between the hosts are properly configured to carry these VLAN IDs. You must configure all switch ports facing the hosts as trunks with the VLANs. The term "trunk" is used here to describe a port that allows multiple VLAN IDs to traverse the same port.
Configuration guidance for the physical switches is outside the scope of this document.
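Although detailed switch configuration is out of scope, a trunk on a typical managed switch might look roughly like the following. The Cisco IOS-style syntax and the port name are assumptions for illustration; your switch will differ:

```
! Illustrative only: trunk a hypothetical host-facing port and allow the
! VLAN IDs used in this scenario (600 internal API, 603 tenant,
! 411 public API, 412 floating IP).
interface GigabitEthernet1/0/2
 switchport mode trunk
 switchport trunk allowed vlan 600,603,411,412
```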
You can use the TenantNetworkVlanID attribute in the network-environment.yaml file to define a VLAN tag for the Tenant network when using VXLAN tunneling, so that, for example, the VXLAN tenant traffic is transported over a tagged VLAN on the underlay network. This value can also be empty if the Tenant network is meant to run over the native VLAN. Also note that when using VLAN tenant networks, VLAN tags other than the value provided for TenantNetworkVlanID can be used.
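For example, the setting described above might appear in network-environment.yaml as follows (VLAN 603 is the value used in this scenario):

```yaml
parameter_defaults:
  # Carry VXLAN tenant traffic over VLAN 603 (the value used in this
  # scenario); set to '' to run the tenant network on the native VLAN.
  TenantNetworkVlanID: 603
```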
6.1.5. OpenDaylight configuration files used in this scenario
To deploy this scenario of OpenStack and OpenDaylight, enter the following deployment command on the undercloud node:
$ openstack overcloud deploy --debug \
--templates \
--environment-file "$HOME/extra_env.yaml" \
--libvirt-type kvm \
-e /home/stack/baremetal-vlan/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight.yaml \
--log-file overcloud_install.log &> overcloud_install.log
The rest of this section shows the configuration files used in this scenario, their contents, and an explanation of the settings used.
6.1.5.1. The extra_env.yaml file
This file has only one parameter.
parameter_defaults:
OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-isolated'
These are the mappings that each node, controlled by OpenDaylight, uses. The physical network datacentre is mapped to the br-ex OVS bridge, and traffic from the Tenant network is mapped to the br-isolated OVS bridge.
6.1.5.2. The undercloud.conf file
This file is located in the /home/stack/baremetal-vlan/ directory. The file path points to a customized version of the configuration file.
[DEFAULT]
local_ip = 192.0.5.1/24
network_gateway = 192.0.5.1
undercloud_public_vip = 192.0.5.2
undercloud_admin_vip = 192.0.5.3
local_interface = eno2
network_cidr = 192.0.5.0/24
masquerade_network = 192.0.5.0/24
dhcp_start = 192.0.5.5
dhcp_end = 192.0.5.24
inspection_iprange = 192.0.5.100,192.0.5.120
In this example, the 192.0.5.0/24 subnet is used for the Provisioning network. Note that the physical interface eno2 on the undercloud node is used for provisioning.
6.1.5.3. The network-environment.yaml file
This is the main file for configuring networks. It is located in the /home/stack/baremetal-vlan/ directory. This file specifies the VLAN IDs and IP subnets for the different networks, as well as the provider mappings. The two files in the nic-configs directory, controller.yaml and compute.yaml, specify the network configuration for the Controller and Compute nodes.
The example specifies three Controller nodes and two Compute nodes.
resource_registry:
# Specify the relative/absolute path to the config files you want to use
# to override the default.
OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
# Network isolation configuration
# Service section
# If some service should be disabled, use the following example
# OS::TripleO::Network::Management: OS::Heat::None
OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
OS::TripleO::Network::Management: OS::Heat::None
OS::TripleO::Network::StorageMgmt: OS::Heat::None
OS::TripleO::Network::Storage: OS::Heat::None
# Port assignments for the VIP addresses
OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
# Port assignments for the controller role
OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
# Port assignments for the Compute role
OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
# Port assignments for service virtual IP addresses for the controller role
OS::TripleO::Controller::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
parameter_defaults:
# Customize all these values to match the local environment
InternalApiNetCidr: 172.17.0.0/24
TenantNetCidr: 172.16.0.0/24
ExternalNetCidr: 10.35.184.144/28
# CIDR subnet mask length for provisioning network
ControlPlaneSubnetCidr: '24'
InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
TenantAllocationPools: [{'start': '172.16.0.100', 'end': '172.16.0.200'}]
# Use an External allocation pool which will leave room for floating IP addresses
ExternalAllocationPools: [{'start': '10.35.184.146', 'end': '10.35.184.157'}]
# Set to the router gateway on the external network
ExternalInterfaceDefaultRoute: 10.35.184.158
# Gateway router for the provisioning network (or Undercloud IP)
ControlPlaneDefaultRoute: 192.0.5.254
# Generally the IP of the Undercloud
EC2MetadataIp: 192.0.5.1
InternalApiNetworkVlanID: 600
TenantNetworkVlanID: 603
ExternalNetworkVlanID: 411
# Define the DNS servers (maximum 2) for the overcloud nodes
DnsServers: ["10.35.28.28","8.8.8.8"]
# May set to br-ex if using floating IP addresses only on native VLAN on bridge br-ex
NeutronExternalNetworkBridge: "''"
# The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
NeutronTunnelTypes: ''
# The tenant network type for Neutron (vlan or vxlan).
NeutronNetworkType: 'vxlan'
# The OVS logical->physical bridge mappings to use.
# NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'
# The Neutron ML2 and OpenVSwitch vlan mapping range to support.
NeutronNetworkVLANRanges: 'datacentre:412:412'
# Nova flavor to use.
OvercloudControlFlavor: baremetal
OvercloudComputeFlavor: baremetal
# Number of nodes to deploy.
ControllerCount: 3
ComputeCount: 2
# Sets overcloud nodes custom names
# http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
ControllerHostnameFormat: 'controller-%index%'
ComputeHostnameFormat: 'compute-%index%'
CephStorageHostnameFormat: 'ceph-%index%'
ObjectStorageHostnameFormat: 'swift-%index%'
6.1.5.4. The controller.yaml file
This file is located in the /home/stack/baremetal-vlan/nic-configs/ directory. In this example, two switches are defined: br-isolated and br-ex. nic2 is placed under br-isolated and nic3 under br-ex:
heat_template_version: pike
description: >
Software Config to drive os-net-config to configure VLANs for the
controller role.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal API network
type: string
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage mgmt network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
ExternalNetworkVlanID:
default: ''
description: Vlan ID for the external network traffic.
type: number
InternalApiNetworkVlanID:
default: ''
description: Vlan ID for the internal_api network traffic.
type: number
TenantNetworkVlanID:
default: ''
description: Vlan ID for the tenant network traffic.
type: number
ManagementNetworkVlanID:
default: 23
description: Vlan ID for the management network traffic.
type: number
ExternalInterfaceDefaultRoute:
default: ''
description: default route for the external network
type: string
ControlPlaneSubnetCidr: # Override this with parameter_defaults
default: '24'
description: The subnet CIDR of the control plane network.
type: string
DnsServers: # Override this with parameter_defaults
default: []
description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
type: comma_delimited_list
EC2MetadataIp: # Override this with parameter_defaults
description: The IP address of the EC2 metadata server.
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::StructuredConfig
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: ovs_bridge
name: br-isolated
use_dhcp: false
dns_servers: {get_param: DnsServers}
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
members:
-
type: interface
name: nic2
# force the MAC address of the bridge to this interface
primary: true
-
type: vlan
vlan_id: {get_param: InternalApiNetworkVlanID}
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
-
type: ovs_bridge
name: br-ex
use_dhcp: false
dns_servers: {get_param: DnsServers}
members:
-
type: interface
name: nic3
# force the MAC address of the bridge to this interface
-
type: vlan
vlan_id: {get_param: ExternalNetworkVlanID}
addresses:
-
ip_netmask: {get_param: ExternalIpSubnet}
routes:
-
default: true
next_hop: {get_param: ExternalInterfaceDefaultRoute}
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value: {get_resource: OsNetConfigImpl}
6.1.5.5. The compute.yaml file
This file is located in the /home/stack/baremetal-vlan/nic-configs/ directory. Most of the options in the Compute configuration are the same as in the Controller configuration. In this example, nic3 is under br-ex so that it can be used for external connectivity (the Floating IP network).
heat_template_version: pike
description: >
Software Config to drive os-net-config to configure VLANs for the
Compute role.
parameters:
ControlPlaneIp:
default: ''
description: IP address/subnet on the ctlplane network
type: string
ExternalIpSubnet:
default: ''
description: IP address/subnet on the external network
type: string
InternalApiIpSubnet:
default: ''
description: IP address/subnet on the internal API network
type: string
TenantIpSubnet:
default: ''
description: IP address/subnet on the tenant network
type: string
ManagementIpSubnet: # Only populated when including environments/network-management.yaml
default: ''
description: IP address/subnet on the management network
type: string
InternalApiNetworkVlanID:
default: ''
description: Vlan ID for the internal_api network traffic.
type: number
TenantNetworkVlanID:
default: ''
description: Vlan ID for the tenant network traffic.
type: number
ManagementNetworkVlanID:
default: 23
description: Vlan ID for the management network traffic.
type: number
StorageIpSubnet:
default: ''
description: IP address/subnet on the storage network
type: string
StorageMgmtIpSubnet:
default: ''
description: IP address/subnet on the storage mgmt network
type: string
ControlPlaneSubnetCidr: # Override this with parameter_defaults
default: '24'
description: The subnet CIDR of the control plane network.
type: string
ControlPlaneDefaultRoute: # Override this with parameter_defaults
description: The default route of the control plane network.
type: string
DnsServers: # Override this with parameter_defaults
default: []
description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
type: comma_delimited_list
EC2MetadataIp: # Override this with parameter_defaults
description: The IP address of the EC2 metadata server.
type: string
ExternalInterfaceDefaultRoute:
default: ''
description: default route for the external network
type: string
resources:
OsNetConfigImpl:
type: OS::Heat::StructuredConfig
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: ovs_bridge
name: br-isolated
use_dhcp: false
dns_servers: {get_param: DnsServers}
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
-
next_hop: {get_param: ControlPlaneDefaultRoute}
members:
-
type: interface
name: nic2
# force the MAC address of the bridge to this interface
primary: true
-
type: vlan
vlan_id: {get_param: InternalApiNetworkVlanID}
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
-
type: ovs_bridge
name: br-ex
use_dhcp: false
members:
-
type: interface
name: nic3
outputs:
OS::stack_id:
description: The OsNetConfigImpl resource.
value: {get_resource: OsNetConfigImpl}
6.1.6. Red Hat OpenStack Platform director configuration files used in this scenario
6.1.6.1. The neutron.conf file
This file is located in the /etc/neutron/ directory and should contain the following information:
[DEFAULT]
service_plugins=odl-router_v2,trunk
6.1.6.2. The ml2_conf.ini file
This file is located in the /etc/neutron/plugins/ml2/ directory and should contain the following information:
[ml2]
type_drivers = vxlan,vlan,flat,gre
tenant_network_types = vxlan
mechanism_drivers = opendaylight_v2
[ml2_type_vlan]
network_vlan_ranges = datacentre:412:412
[ml2_odl]
password = admin
username = admin
url = http://172.17.1.18:8081/controller/nb/v2/neutron
- Under the [ml2] section, note that VXLAN is used as the network type, and so is the opendaylight_v2 mechanism driver.
- Under [ml2_type_vlan], the same mappings should be set as configured in the network-environment.yaml file.
- Under [ml2_odl], you should see the configuration for accessing the OpenDaylight controller.
You can use these details to confirm access to the OpenDaylight controller:
$ curl -H "Content-Type:application/json" -u admin:admin http://172.17.1.18:8081/controller/nb/v2/neutron/networks