Chapter 6. Deployment examples
6.1. Model installation scenario using tenant network
This section explores an example installation of OpenDaylight with OpenStack in a production environment. The scenario uses VXLAN tunneling to separate tenant traffic.
6.1.1. Physical Topology
The topology of this scenario consists of six nodes:
- 1 x director undercloud node
- 3 x OpenStack overcloud controllers with the OpenDaylight SDN controller installed in addition to other OpenStack services
- 2 x OpenStack overcloud Compute nodes
6.1.2. Planning Physical Network Environment
The overcloud Controller nodes use three network interface cards (NICs) each:
Name | Purpose |
---|---|
nic1 | Management network (e.g., accessing the node through SSH) |
nic2 | Tenant (VXLAN) carrier, provisioning (PXE, DHCP), and Internal API networks |
nic3 | Public API network access |
The overcloud Compute nodes are equipped with three NICs:
Name | Purpose |
---|---|
nic1 | Management network |
nic2 | Tenant carrier, provisioning, and Internal API networks |
nic3 | External (Floating IPs) network |
The undercloud node is equipped with two NICs:
Name | Purpose |
---|---|
nic1 | Used for the Management network |
nic2 | Used for the Provisioning network |
6.1.3. Planning NIC Connectivity
In this case, the environment files use abstracted numbered interfaces (nic1, nic2) rather than the actual device names presented on the host operating system (such as eth0 or eno2). Hosts that belong to the same role do not require identical network interface device names. There is no problem if one host uses the em1 and em2 interfaces while another uses eno1 and eno2; each NIC is referred to as nic1 and nic2.
The abstracted NIC scheme relies only on interfaces that are live and connected. In cases where the hosts have a different number of interfaces, it is sufficient to use the minimal number of interfaces that you need to connect the hosts. For example, if there are four physical interfaces on one host and six on the other, you should only use nic1, nic2, nic3, and nic4 and plug in four cables on both hosts.
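If you are unsure which interfaces count as nic1, nic2, and so on, you can list the interfaces that are up and carrier-connected on each host before deployment. This is a minimal check, assuming the standard iproute2 and ethtool utilities are present; the device name em1 below is only an example.
$ ip -br link show up                          # interfaces that are administratively up
$ sudo ethtool em1 | grep "Link detected"      # confirm the cable is actually connected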
6.1.4. Planning Networks, VLANs and IPs
This scenario uses network isolation to separate the Management, Provisioning, Internal API, Tenant, Public API, and Floating IPs network traffic. The following graphic shows an example network configuration with a custom role deployment. If required, you can also include OpenDaylight in the Red Hat OpenStack Platform controller role, which is the default setup. In this scheme, the IPMI network, NICs, and routing are not shown. You might need additional networks depending on the OpenStack configuration.
Figure 6.1. Detailed network topology used in this scenario
The table shows the VLAN ID and IP subnet associated with each network:
Network | VLAN ID | IP Subnet |
---|---|---|
Provisioning | Native | 192.0.5.0/24 |
Internal API | 600 | 172.17.0.0/24 |
Tenant | 603 | 172.16.0.0/24 |
Public API | 411 | 10.35.184.144/28 |
Floating IP | 412 | 10.35.186.146/28 |
OpenStack Platform director creates the br-isolated OVS bridge and adds the VLAN interfaces for each network, as defined in the network configuration files. The director also creates the br-ex bridge with the relevant network interface attached to it.
Ensure that your physical network switches that provide connectivity between the hosts are properly configured to carry those VLAN IDs. You must configure all switch ports facing the hosts as "trunks" with the VLANs. The term "trunk" is used here to describe a port that allows multiple VLAN IDs to traverse through the same port.
Configuration guidance for the physical switches is outside the scope of this document.
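Although switch configuration is out of scope, you can confirm from a host that tagged traffic for a given VLAN actually arrives on the trunked interface. This is a hedged example using tcpdump; replace em2 and the VLAN ID with the interface and VLAN you want to verify.
$ sudo tcpdump -nn -e -i em2 -c 5 vlan 600     # show five tagged frames for VLAN 600, including the 802.1Q header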
You can use the TenantNetworkVlanID attribute in the network-environment.yaml file to define a VLAN tag for the Tenant network when you use VXLAN tunneling, for example, when VXLAN tenant traffic is transported over a VLAN-tagged underlay network. This value can also be empty if you want the Tenant network to run over the native VLAN. Note that when you use VLAN tenant type networks, VLAN tags other than the value provided for TenantNetworkVlanID can be used.
6.1.5. OpenDaylight configuration files used in this scenario
To deploy this scenario of OpenStack and OpenDaylight, the following deployment command was entered on the undercloud node:
$ openstack overcloud deploy --debug \
  --templates \
  --environment-file "$HOME/extra_env.yaml" \
  --libvirt-type kvm \
  -e /home/stack/baremetal-vlan/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight.yaml \
  --log-file overcloud_install.log &> overcloud_install.log
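Because the command redirects all output to overcloud_install.log, you can follow the deployment from another shell on the undercloud. The following suggestion assumes that the stack user credentials (stackrc) are sourced.
$ tail -f overcloud_install.log
$ openstack stack list       # the overcloud stack moves from CREATE_IN_PROGRESS to CREATE_COMPLETE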
The following sections show the configuration files used in this scenario, their content, and an explanation of the settings used.
6.1.5.1. The extra_env.yaml file
The file has only one parameter.
parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-isolated'
These are the mappings that each node controlled by OpenDaylight uses. The physical network datacentre is mapped to the br-ex OVS bridge, and the tenant network traffic is mapped to the br-isolated OVS bridge.
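After the overcloud is deployed, you can confirm on any OpenDaylight-managed node that both bridges named in the mapping exist. This is a quick check, assuming you log in to the node as the heat-admin user created by director.
$ sudo ovs-vsctl list-br     # expect br-ex and br-isolated (plus br-int once OpenDaylight manages the node)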
6.1.5.2. The undercloud.conf file
This file is located in the /home/stack/baremetal-vlan/ directory.
The file path points to customized versions of the configuration files.
[DEFAULT]
local_ip = 192.0.5.1/24
network_gateway = 192.0.5.1
undercloud_public_vip = 192.0.5.2
undercloud_admin_vip = 192.0.5.3
local_interface = eno2
network_cidr = 192.0.5.0/24
masquerade_network = 192.0.5.0/24
dhcp_start = 192.0.5.5
dhcp_end = 192.0.5.24
inspection_iprange = 192.0.5.100,192.0.5.120
In this example, the 192.0.5.0/24 subnet for the Provisioning network is used. Note that the physical interface eno2 is used on the undercloud node for provisioning.
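As a reminder of how this file is consumed, the undercloud is installed from it with the standard director command, run as the stack user. Director reads undercloud.conf from the stack user's home directory, so in this layout you would copy the customized file there first; the copy step is an assumption about this environment, not part of the recorded deployment.
$ cp /home/stack/baremetal-vlan/undercloud.conf /home/stack/undercloud.conf   # director reads ~/undercloud.conf
$ openstack undercloud install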
6.1.5.3. The network-environment.yaml file
This is the main file for configuring the network. It is located in the /home/stack/baremetal-vlan/ directory. The following file specifies the VLAN IDs and IP subnets for the different networks, as well as the provider mappings. The controller.yaml and compute.yaml files in the nic-configs directory specify the network configuration for the Controller and Compute nodes.
The number of Controller nodes (3) and Compute nodes (2) is specified in the example.
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to
  # override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml

  # Network isolation configuration
  # Service section
  # If some service should be disabled, use the following example
  # OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::StorageMgmt: OS::Heat::None
  OS::TripleO::Network::Storage: OS::Heat::None

  # Port assignments for the VIP addresses
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the Compute role
  OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for service virtual IP addresses for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

parameter_defaults:
  # Customize all these values to match the local environment
  InternalApiNetCidr: 172.17.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.35.184.144/28
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.100', 'end': '172.16.0.200'}]
  # Use an External allocation pool which will leave room for floating IP addresses
  ExternalAllocationPools: [{'start': '10.35.184.146', 'end': '10.35.184.157'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.35.184.158
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.5.254
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.5.1
  InternalApiNetworkVlanID: 600
  TenantNetworkVlanID: 603
  ExternalNetworkVlanID: 411
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["10.35.28.28","8.8.8.8"]
  # May set to br-ex if using floating IP addresses only on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: ''
  # The tenant network type for Neutron (vlan or vxlan).
  NeutronNetworkType: 'vxlan'
  # The OVS logical->physical bridge mappings to use.
  # NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'
  # The Neutron ML2 and OpenVSwitch vlan mapping range to support.
  NeutronNetworkVLANRanges: 'datacentre:412:412'
  # Nova flavor to use.
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  # Number of nodes to deploy.
  ControllerCount: 3
  ComputeCount: 2
  # Sets overcloud nodes custom names
  # http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHostnameFormat: 'compute-%index%'
  CephStorageHostnameFormat: 'ceph-%index%'
  ObjectStorageHostnameFormat: 'swift-%index%'
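Before running the deployment, it can save time to confirm that the customized environment files are still valid YAML after editing. A minimal sketch, assuming Python with PyYAML is available on the undercloud:
$ python -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("OK")' \
    /home/stack/baremetal-vlan/network-environment.yaml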
6.1.5.4. The controller.yaml file
The file is located in the /home/stack/baremetal-vlan/nic-configs/ directory. This example defines two OVS bridges, br-isolated and br-ex. nic2 is placed under br-isolated and nic3 under br-ex:
heat_template_version: pike description: > Software Config to drive os-net-config to configure VLANs for the controller role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string InternalApiIpSubnet: default: '' description: IP address/subnet on the internal API network type: string StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage mgmt network type: string TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ExternalNetworkVlanID: default: '' description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: '' description: Vlan ID for the internal_api network traffic. type: number TenantNetworkVlanID: default: '' description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 23 description: Vlan ID for the management network traffic. type: number ExternalInterfaceDefaultRoute: default: '' description: default route for the external network type: string ControlPlaneSubnetCidr: # Override this with parameter_defaults default: '24' description: The subnet CIDR of the control plane network. type: string DnsServers: # Override this with parameter_defaults default: [] description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf. type: comma_delimited_list EC2MetadataIp: # Override this with parameter_defaults description: The IP address of the EC2 metadata server. type: string resources: OsNetConfigImpl: type: OS::Heat::StructuredConfig properties: group: os-apply-config config: os_net_config: network_config: - type: ovs_bridge name: br-isolated use_dhcp: false dns_servers: {get_param: DnsServers} addresses: - ip_netmask: list_join: - '/' - - {get_param: ControlPlaneIp} - {get_param: ControlPlaneSubnetCidr} routes: - ip_netmask: 169.254.169.254/32 next_hop: {get_param: EC2MetadataIp} members: - type: interface name: nic2 # force the MAC address of the bridge to this interface primary: true - type: vlan vlan_id: {get_param: InternalApiNetworkVlanID} addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} - type: ovs_bridge name: br-ex use_dhcp: false dns_servers: {get_param: DnsServers} members: - type: interface name: nic3 # force the MAC address of the bridge to this interface - type: vlan vlan_id: {get_param: ExternalNetworkVlanID} addresses: - ip_netmask: {get_param: ExternalIpSubnet} routes: - default: true next_hop: {get_param: ExternalInterfaceDefaultRoute} outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: {get_resource: OsNetConfigImpl}
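Once the controllers are deployed, you can check that os-net-config applied this template as expected. This is an illustrative check, assuming heat-admin access to a controller node; the VLAN port names shown (vlan600, vlan603) follow the usual os-net-config defaults and may differ in your environment.
$ sudo ovs-vsctl list-ports br-isolated   # expect the physical NIC plus the Internal API and Tenant VLAN ports
$ ip addr show vlan600                    # Internal API address taken from InternalApiIpSubnet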
6.1.5.5. The compute.yaml file
The file is located in the /home/stack/baremetal-vlan/nic-configs/ directory. Most of the options in the Compute configuration are the same as in the Controller configuration. In this example, nic3 is under br-ex and is used for external connectivity (the Floating IP network):
heat_template_version: pike description: > Software Config to drive os-net-config to configure VLANs for the Compute role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string InternalApiIpSubnet: default: '' description: IP address/subnet on the internal API network type: string TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string InternalApiNetworkVlanID: default: '' description: Vlan ID for the internal_api network traffic. type: number TenantNetworkVlanID: default: '' description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 23 description: Vlan ID for the management network traffic. type: number StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage mgmt network type: string ControlPlaneSubnetCidr: # Override this with parameter_defaults default: '24' description: The subnet CIDR of the control plane network. type: string ControlPlaneDefaultRoute: # Override this with parameter_defaults description: The default route of the control plane network. type: string DnsServers: # Override this with parameter_defaults default: [] description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf. type: comma_delimited_list EC2MetadataIp: # Override this with parameter_defaults description: The IP address of the EC2 metadata server. type: string ExternalInterfaceDefaultRoute: default: '' description: default route for the external network type: string resources: OsNetConfigImpl: type: OS::Heat::StructuredConfig properties: group: os-apply-config config: os_net_config: network_config: - type: ovs_bridge name: br-isolated use_dhcp: false dns_servers: {get_param: DnsServers} addresses: - ip_netmask: list_join: - '/' - - {get_param: ControlPlaneIp} - {get_param: ControlPlaneSubnetCidr} routes: - ip_netmask: 169.254.169.254/32 next_hop: {get_param: EC2MetadataIp} - next_hop: {get_param: ControlPlaneDefaultRoute} members: - type: interface name: nic2 # force the MAC address of the bridge to this interface primary: true - type: vlan vlan_id: {get_param: InternalApiNetworkVlanID} addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} - type: ovs_bridge name: br-ex use_dhcp: false members: - type: interface name: nic3 outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: {get_resource: OsNetConfigImpl}
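A similar spot-check on a Compute node verifies that the provisioning address landed on br-isolated and that the nic3 device is attached to br-ex. This is illustrative only; device names vary per host.
$ ip addr show br-isolated        # should carry the node's provisioning (ctlplane) IP address
$ sudo ovs-vsctl list-ports br-ex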
6.1.6. Red Hat OpenStack Platform director configuration files used in this scenario
6.1.6.1. The neutron.conf file
This file is located in the /etc/neutron/ directory and should contain the following information:
[DEFAULT]
service_plugins=odl-router_v2,trunk
6.1.6.2. The ml2_conf.ini file
This file is located in the /etc/neutron/plugins/ml2/ directory and should contain the following information:
[ml2]
type_drivers = vxlan,vlan,flat,gre
tenant_network_types = vxlan
mechanism_drivers = opendaylight_v2

[ml2_type_vlan]
network_vlan_ranges = datacentre:412:412

[ml2_odl]
password = admin
username = admin
url = http://172.17.1.18:8081/controller/nb/v2/neutron
- Under the [ml2] section, note that VXLAN is used as the tenant network type and that the opendaylight_v2 mechanism driver is enabled.
- Under [ml2_type_vlan], the same mappings as configured in the network-environment.yaml file should be set.
- Under [ml2_odl], you can see the configuration for accessing the OpenDaylight Controller.
You can use these details to confirm access to the OpenDaylight Controller:
$ curl -H "Content-Type:application/json" -u admin:admin http://172.17.1.18:8081/controller/nb/v2/neutron/networks
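If the call succeeds, OpenDaylight returns a JSON document listing the networks known to its neutron northbound API. Piping the output through json.tool makes it readable; the subnets and ports endpoints can be queried in the same way.
$ curl -s -H "Content-Type:application/json" -u admin:admin \
    http://172.17.1.18:8081/controller/nb/v2/neutron/networks | python -m json.tool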
6.2. Model installation scenario using provider networks
This installation scenario shows an example of OpenStack and OpenDaylight using provider networks instead of tenant networks. An external neutron provider network bridges VM instances to a physical network infrastructure that provides Layer-3 (L3) and other network services. In most cases, provider networks implement Layer-2 (L2) segmentation using the VLAN IDs. A provider network maps to a provider bridge on each Compute node that supports launching VM instances on the provider network.
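As a brief illustration of what this scenario enables, the following sketch shows how a VLAN provider network could be created after deployment. The network name and segment ID are placeholders; the physical network label and the VLAN must match the provider mappings and NeutronNetworkVLANRanges configured later in this section.
$ openstack network create --share \
    --provider-network-type vlan \
    --provider-physical-network tenant \
    --provider-segment 555 \
    provider-vlan555     # example name; 555 must fall inside the configured tenant VLAN range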
6.2.1. Physical Topology
The topology of this scenario consists of six nodes:
- 1 x director undercloud node
- 3 x OpenStack overcloud controllers with the OpenDaylight SDN controller installed in addition to other OpenStack services
- 2 x OpenStack overcloud Compute nodes
6.2.2. Planning Physical Network Environment
The overcloud Controller nodes use four network interface cards (NICs) each:
Name | Purpose |
---|---|
nic1 | Management network (e.g., accessing the node through SSH) |
nic2 | Provisioning (PXE, DHCP), and Internal API networks |
nic3 | Tenant network |
nic4 | Public API network, Floating IP network |
The overcloud Compute nodes are equipped with four NICs:
Name | Purpose |
---|---|
nic1 | Management network |
nic2 | Provisioning, and Internal API networks |
nic3 | Tenant network |
nic4 | Floating IP network |
The undercloud node is equipped with two NICs:
Name | Purpose |
---|---|
nic1 | Used for the Management network |
nic2 | Used for the Provisioning network |
6.2.3. Planning NIC Connectivity
In this case, the environment files use abstracted numbered interfaces (nic1, nic2) rather than the actual device names presented on the host operating system, for example, eth0 or eno2. Hosts that belong to the same role do not require identical network interface device names. There is no problem if one host uses the em1 and em2 interfaces while another uses eno1 and eno2; each NIC is referred to as nic1 and nic2.
The abstracted NIC scheme relies only on interfaces that are live and connected. In cases where the hosts have a different number of interfaces, it is sufficient to use the minimal number of interfaces that you need to connect the hosts. For example, if there are four physical interfaces on one host and six on the other, you should only use nic1, nic2, nic3, and nic4 and plug in four cables on both hosts.
6.2.4. Planning Networks, VLANs and IPs
This scenario uses network isolation to separate the Management, Provisioning, Internal API, Tenant, Public API, and Floating IPs network traffic.
Figure 6.2. Detailed network topology used in this scenario

The table shows the VLAN ID and IP subnet associated with each network:
Network | VLAN ID | IP Subnet |
---|---|---|
Provisioning | Native | 192.0.5.0/24 |
Internal API | 600 | 172.17.0.0/24 |
Tenant | 554,555-601 | 172.16.0.0/24 |
Public API | 552 | 192.168.210.0/24 |
Floating IP | 553 | 10.35.186.146/28 |
The OpenStack Platform director creates the br-isolated OVS bridge and adds the VLAN interfaces for each network, as defined in the network configuration files. The director also creates the br-ex bridge automatically with the relevant network interface attached to it.
Ensure that the physical network switches that provide connectivity between the hosts are properly configured to carry those VLAN IDs. You must configure all switch ports facing the hosts as trunks with the VLANs. The term "trunk" is used here to describe a port that allows multiple VLAN IDs to traverse through the same port.
Configuration guidance for the physical switches is outside the scope of this document.
You can use the TenantNetworkVlanID attribute in the network-environment.yaml file to define a VLAN tag for the Tenant network when you use VXLAN tunneling, for example, when VXLAN tenant traffic is transported over a VLAN-tagged underlay network. This value can also be empty if you want the Tenant network to run over the native VLAN. Note that when you use VLAN tenant type networks, VLAN tags other than the value provided for TenantNetworkVlanID can be used.
6.2.5. OpenDaylight configuration files used in this scenario
To deploy this scenario of OpenStack and OpenDaylight, the following deployment command was entered on the undercloud node:
$ openstack overcloud deploy --debug \
  --templates \
  --environment-file "$HOME/extra_env.yaml" \
  --libvirt-type kvm \
  -e /home/stack/baremetal-vlan/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight.yaml \
  --log-file overcloud_install.log &> overcloud_install.log
The following sections show the configuration files used in this scenario, their content, and an explanation of the settings used.
6.2.5.1. extra_env.yaml file
The file has only one parameter.
parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-vlan'
These are the mappings that each node controlled by OpenDaylight uses. The physical network datacentre is mapped to the br-ex OVS bridge, and the tenant network traffic is mapped to the br-vlan OVS bridge.
6.2.5.2. undercloud.conf file
This file is in the /home/stack/ directory.
The file path points to customized versions of the configuration files.
[DEFAULT]
local_ip = 192.0.5.1/24
network_gateway = 192.0.5.1
undercloud_public_vip = 192.0.5.2
undercloud_admin_vip = 192.0.5.3
local_interface = eno2
network_cidr = 192.0.5.0/24
masquerade_network = 192.0.5.0/24
dhcp_start = 192.0.5.5
dhcp_end = 192.0.5.24
inspection_iprange = 192.0.5.100,192.0.5.120
This example uses the 192.0.5.0/24 subnet for the Provisioning network. Note that the physical interface eno2 is used on the undercloud node for provisioning.
6.2.5.3. network-environment.yaml file
This is the main file for configuring the network. It is located in the /home/stack/baremetal-vlan/ directory. The following file specifies the VLAN IDs and IP subnets for the different networks, as well as the provider mappings. The controller.yaml and compute.yaml files in the nic-configs directory specify the network configuration for the Controller and Compute nodes.
The number of Controller nodes (3) and Compute nodes (2) is specified in the example.
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml

  # Network isolation configuration
  # Service section
  # If some service should be disabled, use the following example
  # OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: OS::Heat::None
  OS::TripleO::Network::StorageMgmt: OS::Heat::None
  OS::TripleO::Network::Storage: OS::Heat::None

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for service virtual IPs for the controller role
  OS::TripleO::Controller::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml
  OS::TripleO::NodeUserData: /home/stack/baremetal-vlan/firstboot-config.yaml

parameter_defaults:
  # Customize all these values to match the local environment
  InternalApiNetCidr: 172.17.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 192.168.210.0/24
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.100', 'end': '172.16.0.200'}]
  # Use an External allocation pool which will leave room for floating IPs
  ExternalAllocationPools: [{'start': '192.168.210.2', 'end': '192.168.210.12'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.210.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.5.1
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.5.1
  InternalApiNetworkVlanID: 600
  TenantNetworkVlanID: 554
  ExternalNetworkVlanID: 552
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["10.35.28.28","8.8.8.8"]
  # May set to br-ex if using floating IPs only on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: ''
  # The tenant network type for Neutron (vlan or vxlan).
  NeutronNetworkType: 'vlan'
  # The OVS logical->physical bridge mappings to use.
  # NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'
  # The Neutron ML2 and OpenVSwitch vlan mapping range to support.
  NeutronNetworkVLANRanges: 'datacentre:552:553,tenant:555:601'
  # Nova flavor to use.
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  # Number of nodes to deploy.
  ControllerCount: 3
  ComputeCount: 2
  # Sets overcloud nodes custom names
  # http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHostnameFormat: 'compute-%index%'
  CephStorageHostnameFormat: 'ceph-%index%'
  ObjectStorageHostnameFormat: 'swift-%index%'
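With NeutronNetworkVLANRanges set to datacentre:552:553, the external (Floating IP) network on VLAN 553 could be created after deployment along the following lines. The network name is a placeholder, and the subnet details (gateway, allocation pool) must come from your own Floating IP range.
$ openstack network create --external \
    --provider-network-type vlan \
    --provider-physical-network datacentre \
    --provider-segment 553 \
    external-net        # placeholder name for the Floating IP network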
6.2.5.4. controller.yaml file
This file is in the /home/stack/baremetal-vlan/nic-configs/ directory. This example defines three OVS bridges: br-isolated, br-vlan, and br-ex. nic2 is placed under br-isolated, nic3 under br-vlan, and nic4 under br-ex:
heat_template_version: pike description: > Software Config to drive os-net-config to configure VLANs for the controller role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string InternalApiIpSubnet: default: '' description: IP address/subnet on the internal API network type: string StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage mgmt network type: string TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string ExternalNetworkVlanID: default: '' description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: '' description: Vlan ID for the internal_api network traffic. type: number TenantNetworkVlanID: default: '' description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 23 description: Vlan ID for the management network traffic. type: number ExternalInterfaceDefaultRoute: default: '' description: default route for the external network type: string ControlPlaneSubnetCidr: # Override this with parameter_defaults default: '24' description: The subnet CIDR of the control plane network. type: string DnsServers: # Override this with parameter_defaults default: [] description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf. type: comma_delimited_list EC2MetadataIp: # Override this with parameter_defaults description: The IP address of the EC2 metadata server. type: string resources: OsNetConfigImpl: type: OS::Heat::StructuredConfig properties: group: os-apply-config config: os_net_config: network_config: - type: interface name: nic1 use_dhcp: false - type: ovs_bridge name: br-isolated use_dhcp: false dns_servers: {get_param: DnsServers} addresses: - ip_netmask: list_join: - '/' - - {get_param: ControlPlaneIp} - {get_param: ControlPlaneSubnetCidr} routes: - ip_netmask: 169.254.169.254/32 next_hop: {get_param: EC2MetadataIp} members: - type: interface name: nic2 # force the MAC address of the bridge to this interface primary: true - type: vlan vlan_id: {get_param: InternalApiNetworkVlanID} addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: ovs_bridge name: br-ex use_dhcp: false dns_servers: {get_param: DnsServers} members: - type: interface name: nic4 # force the MAC address of the bridge to this interface - type: vlan vlan_id: {get_param: ExternalNetworkVlanID} addresses: - ip_netmask: {get_param: ExternalIpSubnet} routes: - default: true next_hop: {get_param: ExternalInterfaceDefaultRoute} - type: ovs_bridge name: br-vlan use_dhcp: false dns_servers: {get_param: DnsServers} members: - type: interface name: nic3 - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: {get_resource: OsNetConfigImpl}
6.2.5.5. compute.yaml file
This file is in the /home/stack/baremetal-vlan/nic-configs/ directory. Most of the options in the Compute configuration are the same as in the Controller configuration. In this example, nic4 is under br-ex and is used for external connectivity (the Floating IP network):
heat_template_version: pike description: > Software Config to drive os-net-config to configure VLANs for the compute role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string InternalApiIpSubnet: default: '' description: IP address/subnet on the internal API network type: string TenantIpSubnet: default: '' description: IP address/subnet on the tenant network type: string ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string InternalApiNetworkVlanID: default: '' description: Vlan ID for the internal_api network traffic. type: number TenantNetworkVlanID: default: '' description: Vlan ID for the tenant network traffic. type: number ManagementNetworkVlanID: default: 23 description: Vlan ID for the management network traffic. type: number StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage mgmt network type: string ControlPlaneSubnetCidr: # Override this with parameter_defaults default: '24' description: The subnet CIDR of the control plane network. type: string ControlPlaneDefaultRoute: # Override this with parameter_defaults description: The default route of the control plane network. type: string DnsServers: # Override this with parameter_defaults default: [] description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf. type: comma_delimited_list EC2MetadataIp: # Override this with parameter_defaults description: The IP address of the EC2 metadata server. type: string ExternalInterfaceDefaultRoute: default: '' description: default route for the external network type: string resources: OsNetConfigImpl: type: OS::Heat::StructuredConfig properties: group: os-apply-config config: os_net_config: network_config: - type: interface name: nic1 use_dhcp: false - type: ovs_bridge name: br-isolated use_dhcp: false dns_servers: {get_param: DnsServers} addresses: - ip_netmask: list_join: - '/' - - {get_param: ControlPlaneIp} - {get_param: ControlPlaneSubnetCidr} routes: - ip_netmask: 169.254.169.254/32 next_hop: {get_param: EC2MetadataIp} - next_hop: {get_param: ControlPlaneDefaultRoute} default: true members: - type: interface name: nic2 # force the MAC address of the bridge to this interface primary: true - type: vlan vlan_id: {get_param: InternalApiNetworkVlanID} addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: ovs_bridge name: br-ex use_dhcp: false members: - type: interface name: nic4 - type: ovs_bridge name: br-vlan use_dhcp: false dns_servers: {get_param: DnsServers} members: - type: interface name: nic3 - type: vlan vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet} outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: {get_resource: OsNetConfigImpl}
6.2.6. Red Hat OpenStack Platform director configuration files used in this scenario
6.2.6.1. neutron.conf file
This file is in the /etc/neutron/ directory and contains the following information:
[DEFAULT]
service_plugins=odl-router_v2,trunk
6.2.6.2. ml2_conf.ini file
This file is in the /etc/neutron/plugins/ml2/ directory and contains the following information:
[DEFAULT]
[ml2]
type_drivers = vxlan,vlan,flat,gre
tenant_network_types = vlan
mechanism_drivers = opendaylight_v2
extension_drivers = qos,port_security
path_mtu = 0

[ml2_type_flat]
flat_networks = datacentre

[ml2_type_geneve]

[ml2_type_gre]
tunnel_id_ranges = 1:4094

[ml2_type_vlan]
network_vlan_ranges = datacentre:552:553,tenant:555:601

[ml2_type_vxlan]
vni_ranges = 1:4094
vxlan_group = 224.0.0.1

[securitygroup]

[ml2_odl]
password=<PASSWORD>
username=<USER>
url=http://172.17.0.10:8081/controller/nb/v2/neutron
- Under the [ml2] section, note that VLAN is used as the tenant network type and that the opendaylight_v2 mechanism driver is enabled.
- Under [ml2_type_vlan], set the same mappings as in the network-environment.yaml file.
- Under [ml2_odl], you can see the configuration for accessing the OpenDaylight Controller, which you can read back as shown below.
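To read these values back from a deployed node without opening the file, you can query them directly, for example with crudini if it is installed (grep works equally well).
$ sudo crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2_odl url
$ sudo crudini --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers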
You can use these details to confirm access to the OpenDaylight Controller:
$ curl -H "Content-Type:application/json" -u admin:admin http://172.17.1.18:8081/controller/nb/v2/neutron/networks