Chapter 7. Model installation scenario
This chapter describes an example OpenDaylight installation with OpenStack in a production environment. In this scenario, tunneling (VXLAN) is used for tenant traffic separation.
7.1. Physical Topology
The topology of this scenario consists of six nodes:
- 1 x director undercloud node
- 3 x OpenStack overcloud controllers; the first one has the OpenDaylight SDN controller installed in addition to other OpenStack services
- 2 x OpenStack overcloud Compute nodes
7.2. Planning Physical Network Environment
The overcloud controller nodes use three network interface cards (NICs) each:
Name | Purpose |
---|---|
nic1 | Management network (e.g., accessing the node via SSH) |
nic2 | Tenant (VXLAN) carrier, provisioning (PXE, DHCP), and Internal API networks |
nic3 | Public API network access |
The overcloud Compute nodes are equipped with three NICs:
Name | Purpose |
---|---|
nic1 | Management network |
nic2 | Tenant carrier, provisioning, and Internal API networks |
nic3 | External (Floating IPs) network |
The undercloud node is equipped with two NICs:
Name | Purpose |
---|---|
nic1 | Used for the Management network |
nic2 | Used for the Provisioning network |
7.3. Planning NIC Connectivity
In this scenario, the environment files use abstracted numbered interface names (nic1, nic2) rather than the actual device names presented by the host operating system (such as eth0 or eno2). Hosts that belong to the same role do not need identical network interface device names: one host can use the em1 and em2 interfaces while another uses eno1 and eno2, and both are still referenced as nic1 and nic2.
The abstracted NIC scheme only counts interfaces that are up and connected. If hosts have a different number of interfaces, use the minimal number of interfaces needed to connect them. For example, if there are four physical interfaces on one host and six on the other, use only nic1, nic2, nic3, and nic4 and plug in four cables on both hosts.
7.4. Planning Networks, VLANs and IPs
In this scenario, network isolation is used to separate the Management, Provisioning, Internal API, Tenant, Public API, and Floating IPs network traffic.
Figure 7.1. Detailed network topology used in this scenario
The table shows the VLAN ID and IP subnet associated with each network:
Network | VLAN ID | IP Subnet |
---|---|---|
Provisioning | Native | 192.0.5.0/24 |
Internal API | 600 | 172.17.0.0/24 |
Tenant | 603 | 172.16.0.0/24 |
Public API | 411 | 10.35.184.144/28 |
Floating IP | 412 | 10.35.186.146/28 |
The OpenStack Platform director creates the br-isolated OVS bridge and adds the VLAN interfaces for each network as defined in the network configuration files. The director also creates the br-ex bridge automatically, with the relevant network interface attached to it.
Make sure that the physical network switches providing connectivity between the hosts are properly configured to carry those VLAN IDs. You must configure all switch ports facing the hosts as "trunks" with the VLANs mentioned above. The term "trunk" is used here to describe a port that allows multiple VLAN IDs to traverse through it.
Configuration guidance for the physical switches is outside the scope of this document.
The TenantNetworkVlanID parameter in network-environment.yaml is where a VLAN tag can be defined for the Tenant network when using VXLAN tunneling (that is, VXLAN tenant traffic transported over a VLAN-tagged underlay network). This value may be left empty if the Tenant network should run over the native VLAN. Also note that when using VLAN tenant type networks, VLAN tags other than the value provided for TenantNetworkVlanID may be used.
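For reference, the parameter is set in a parameter_defaults block of network-environment.yaml. A minimal sketch, using the VLAN ID 603 from the table above:

parameter_defaults:
  TenantNetworkVlanID: 603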
7.5. OpenDaylight configuration files reference
To deploy the model installation of OpenStack, the following commands were entered on the undercloud node:
$ source ~/stackrc
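The full deploy command is not shown above. As a minimal sketch only, assuming the environment files described in the following sections are kept under /home/stack/baremetal-vlan/ and that extra_env.yaml lives in the same directory, the deployment might be launched with:

$ openstack overcloud deploy --templates \
  -e /home/stack/baremetal-vlan/network-environment.yaml \
  -e /home/stack/baremetal-vlan/extra_env.yaml \
  --control-scale 3 --compute-scale 2

The --control-scale and --compute-scale values correspond to the three controller and two Compute nodes used in this scenario; any OpenDaylight-specific environment files shipped with the core heat templates would normally be passed with additional -e options.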
7.5.1. The extra_env.yaml file
This file has only one parameter:
parameter_defaults:
OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-isolated'
These are the mappings that each node controlled by OpenDaylight will use. The datacentre physical network is mapped to the br-ex OVS bridge and tenant network traffic is mapped to the br-isolated OVS bridge.
7.5.2. The undercloud.conf file
This file is located in the /home/stack/baremetal-vlan/ directory.
In this example, the 192.0.5.0/24 subnet for the Provisioning network is used. Note that the physical interface eno2 is used on the undercloud node for provisioning.
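The contents of undercloud.conf are not reproduced in this section. A minimal sketch of the relevant settings, assuming the 192.0.5.0/24 Provisioning subnet and the eno2 interface mentioned above (the specific addresses and DHCP ranges below are illustrative assumptions, not values from the original file):

[DEFAULT]
# Undercloud interface facing the Provisioning network
local_interface = eno2
# Provisioning (control plane) network used in this scenario
network_cidr = 192.0.5.0/24
# Illustrative addresses inside that subnet (assumptions)
local_ip = 192.0.5.1/24
network_gateway = 192.0.5.1
dhcp_start = 192.0.5.5
dhcp_end = 192.0.5.24
inspection_iprange = 192.0.5.100,192.0.5.120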
7.5.3. The network-environment.yaml file
This is the main file for configuring the network. It is located in the /home/stack/baremetal-vlan/ directory. The file specifies the VLAN IDs and IP subnets for the different networks, as well as the provider mappings.
Also note that the two files in the nic-configs directory, controller.yaml and compute.yaml, specify the network configuration for the controller and Compute nodes.
The number of controller nodes (3) and Compute nodes (2) is specified in the example.
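The full file is not reproduced here. A minimal sketch of how these values might be expressed, using standard TripleO parameter names together with the VLAN IDs and subnets from the table in Section 7.4 (the layout is an assumption, not the verbatim file):

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml

parameter_defaults:
  # Node counts for this scenario
  ControllerCount: 3
  ComputeCount: 2
  # Tenant networks use VXLAN tunneling
  NeutronNetworkType: vxlan
  # VLAN IDs from the table in Section 7.4
  InternalApiNetworkVlanID: 600
  TenantNetworkVlanID: 603
  ExternalNetworkVlanID: 411
  # IP subnets from the table in Section 7.4
  InternalApiNetCidr: 172.17.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.35.184.144/28
  # Provider mappings, matching extra_env.yaml
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'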
7.5.4. The controller.yaml file
The file is located in the /home/stack/baremetal-vlan/nic-configs/ directory.
In this example, two OVS bridges are defined: br-isolated and br-ex. nic2 is placed under br-isolated and nic3 under br-ex.
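The file content is not reproduced here. A minimal sketch of the network_config section in the os-net-config format, using the TripleO parameters referenced by network-environment.yaml (the layout is an assumption, not the verbatim file):

network_config:
  - type: ovs_bridge
    name: br-isolated
    use_dhcp: false
    members:
      - type: interface
        name: nic2                # Tenant, Provisioning and Internal API carrier
        primary: true
      - type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: interface
        name: nic3                # Public API network
        primary: true
      - type: vlan
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}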
7.5.5. The compute.yaml file
The file is located in the /home/stack/baremetal-vlan/nic-configs/ directory.
Most of the options in the Compute configuration are the same as in the controller configuration. In this example, nic3 is under br-ex and is used for external connectivity (the Floating IP network).
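As a sketch of the main difference from the controller file (again an assumption about the layout, not the verbatim content), the br-ex bridge on the Compute nodes can simply enslave nic3 without a VLAN or IP address, because floating IP traffic is handled in the OVS data path programmed by OpenDaylight rather than by a host interface:

  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: interface
        name: nic3                # External (Floating IP) network carrier
        primary: true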
7.6. Director configuration files reference
7.6.1. The neutron.conf file
This file is located in the /etc/neutron/ directory and should contain the following information:
[DEFAULT]
service_plugins=odl-router_v2
7.6.2. The ml2_conf.ini file
This file is located in the /etc/neutron/plugins/ml2/ directory and should contain the following information:
- Under the [ml2] section, note that VXLAN is used as the network type and that the opendaylight_v2 mechanism driver is enabled.
- Under [ml2_type_vlan], the same mappings as configured in the network-environment.yaml file should be set.
- Under [ml2_odl], you should see the configuration for accessing the OpenDaylight controller.
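The file itself is not reproduced here. A minimal sketch of the sections described above, using standard ML2 and networking-odl option names (the VLAN ranges and the admin credentials are illustrative assumptions; the controller URL matches the curl example below):

[ml2]
type_drivers = vxlan,vlan,flat
tenant_network_types = vxlan
mechanism_drivers = opendaylight_v2

[ml2_type_vlan]
# Physical network names match the provider mappings; the ranges are assumptions
network_vlan_ranges = datacentre:1:1000,tenant:1:1000

[ml2_odl]
url = http://172.17.1.18:8081/controller/nb/v2/neutron
username = admin
password = admin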
You can use these details to check that access to the OpenDaylight controller works:
$ curl -H "Content-Type:application/json" -u admin:admin http://172.17.1.18:8081/controller/nb/v2/neutron/networks