Chapter 4. Configure and Deploy OpenDaylight with Red Hat OpenStack Platform 10
There are two methods of deploying OpenDaylight with Red Hat OpenStack Platform 10.
The first includes running OpenDaylight on the default Controller role, while the second isolates OpenDaylight on its own node using a custom role.
In Red Hat OpenStack Platform 10, only a single OpenDaylight instance is supported in either type of deployment. OpenDaylight HA (clustering) will be supported in a future release.
4.1. Configure and run the deployment
The recommended approach to installing OpenDaylight is to use the default environment file and pass it as an argument to the install command on the undercloud. This deploys OpenDaylight using the default environment file neutron-opendaylight-l3.yaml.
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight-l3.yaml
Useful information
The default file contains these values:
# A Heat environment that can be used to deploy OpenDaylight with L3 DVR
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None
  OS::TripleO::Services::OpenDaylightApi: ../puppet/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: "odl-router_v2"
  OpenDaylightEnableL3: "'yes'"
- In Red Hat OpenStack Platform director, the resource_registry is used to map resources for a deployment to the corresponding yaml resource definition file. Services are one type of resource that can be mapped. The OS::Heat::None option disables services that will not be used. In this example, the OpenDaylightApi and OpenDaylightOvs services are enabled, while the default neutron agents are explicitly disabled.
- The heat parameters are normally set inside the director. You can override their default values by using the parameter_defaults section of an environment file. In the above example, certain parameters are overridden to enable OpenDaylight Layer 3 functionality.
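For instance, a minimal sketch of such an override: a custom environment file (the name my-odl-overrides.yaml is illustrative) passed after the default one, so that its parameter_defaults take precedence:

parameter_defaults:
  OpenDaylightPort: 8081

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight-l3.yaml -e my-odl-overrides.yaml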
There is another default OpenDaylight environment file (neutron-opendaylight.yaml) that only enables Layer 2 functionality. However, support for that configuration is deprecated and it should not be used.
Other services and their configuration options are described in the following sections.
4.1.1. Configure the OpenDaylight API Service
You can configure the OpenDaylight API Service by editing opendaylight-api.yaml, located in the /usr/share/openstack-tripleo-heat-templates/puppet/services/ directory.
Configurable options
You can configure the following options:
OpenDaylightPort | Sets the port used for Northbound communication. Defaults to 8081.
OpenDaylightUsername | Sets the login username for OpenDaylight. Defaults to admin. Overriding this parameter is currently unsupported.
OpenDaylightPassword | Sets the login password for OpenDaylight. Defaults to admin. Overriding this parameter is currently unsupported.
OpenDaylightEnableL3 | Deprecated. L3 DVR functionality cannot be disabled when the odl-netvirt-openstack OpenDaylight feature is used.
OpenDaylightEnableDHCP | Enables OpenDaylight to act as the DHCP service. Defaults to false. Overriding this parameter is currently unsupported.
OpenDaylightFeatures | Comma-delimited list of features to boot in OpenDaylight. Defaults to [odl-netvirt-openstack, odl-netvirt-ui]. When using OpenDaylight in the deprecated Layer 2 only mode, this parameter must be overridden to use the odl-ovsdb-openstack feature.
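For example, a sketch of setting these options in a custom environment file; the values shown simply restate the defaults from the table, and the deprecated Layer 2 only mode would instead list odl-ovsdb-openstack:

parameter_defaults:
  OpenDaylightPort: 8081
  OpenDaylightFeatures: ['odl-netvirt-openstack', 'odl-netvirt-ui']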
4.1.2. Configure the OpenDaylight OVS Service
Configure the OpenDaylight OVS Service by changing the values in opendaylight-ovs.yaml, located in the /usr/share/openstack-tripleo-heat-templates/puppet/services/ directory.
Configurable options
You can configure the following:
OpenDaylightPort | Sets the port used for Northbound communication to OpenDaylight. Defaults to 8081. The OVS Service uses the Northbound API to query OpenDaylight and ensure that it is fully up before connecting.
OpenDaylightConnectionProtocol | Layer 7 protocol used for REST access. Defaults to http. Currently, http is the only supported protocol in OpenDaylight.
OpenDaylightCheckURL | The URL used to verify that OpenDaylight is fully up before OVS connects. Defaults to restconf/operational/network-topology:network-topology/topology/netvirt:1.
OpenDaylightProviderMappings | Comma-delimited list of mappings between logical networks and physical interfaces. This setting is required for VLAN deployments. Defaults to datacentre:br-ex.
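For example, a sketch of overriding these options in a custom environment file; the values shown restate the defaults listed above:

parameter_defaults:
  OpenDaylightConnectionProtocol: 'http'
  OpenDaylightProviderMappings: 'datacentre:br-ex'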
4.1.3. Using Neutron Metadata Service with OpenDaylight
The OpenStack Compute service allows virtual machines to query metadata associated with them by making a web request to a special address, 169.254.169.254. The OpenStack Networking service proxies such requests to the nova-api, even when the requests come from isolated networks or from multiple networks with overlapping IP addresses.
The Metadata service uses either the neutron L3 agent router or the DHCP agent instance to serve the metadata requests. Deploying OpenDaylight with the Layer 3 routing plug-in enabled disables the neutron L3 agent. Therefore Metadata must be configured to flow through the DHCP instance, even when a router exists in a tenant network. To configure this, set the NeutronEnableForceMetadata parameter to true.
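In an environment file, this is a single parameter override:

parameter_defaults:
  NeutronEnableForceMetadata: true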
This functionality is already enabled in the default environment file neutron-opendaylight-l3.yaml.
VM instances will have a static host route installed, using the DHCP option 121, for 169.254.169.254/32. With this static route in place, Metadata requests to 169.254.169.254:80 will go to the Metadata nameserver proxy in the DHCP network namespace. The namespace proxy then adds the HTTP headers with the instance’s IP to the request, and connects it to the Metadata agent through the Unix domain socket. The Metadata agent queries neutron for the instance ID that corresponds to the source IP and the network ID and proxies it to the nova Metadata service. The additional HTTP headers are required to maintain isolation between tenants and allow overlapping IP support.
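As a quick check, you can query the Metadata service from inside a guest; the instance-id path below is part of the standard metadata API, and the output shown is illustrative:

$ curl http://169.254.169.254/latest/meta-data/instance-id
i-0000000a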
4.1.4. Configure Network and NIC template
In Red Hat OpenStack Platform director, the physical neutron network datacentre is mapped to an OVS bridge called br-ex by default. This mapping remains the same with the OpenDaylight integration. If you use the default OpenDaylightProviderMappings and plan to create a flat or VLAN External network, you have to configure the OVS br-ex bridge in the NIC template for Compute nodes. Since the Layer 3 plug-in uses distributed routing to these nodes, it is no longer necessary to configure br-ex in the Controller role NIC template.
The br-ex bridge can be mapped to any network in network isolation, but it is typically mapped to the External network, as shown in the following example.
type: ovs_bridge
name: {get_input: bridge_name}
use_dhcp: false
members:
  - type: interface
    name: nic3
    # force the MAC address of the bridge to this interface
    primary: true
dns_servers: {get_param: DnsServers}
addresses:
  - ip_netmask: {get_param: ExternalIpSubnet}
routes:
  - default: true
    ip_netmask: 0.0.0.0/0
    next_hop: {get_param: ExternalInterfaceDefaultRoute}
When using network isolation, you do not have to place an IP address or a default route on this bridge on Compute nodes.
Alternatively, it is possible to configure external network access without using the br-ex bridge at all. To use this method, you must know the interface name of the overcloud Compute node in advance. For example, if eth3 is the deterministic name of the third interface on the Compute node, then you would specify the interface in the NIC template for the Compute node as follows:
- type: interface
  name: eth3
  use_dhcp: false
Having configured the NIC template, you must override the OpenDaylightProviderMappings parameter to map the interface to the physical neutron network: datacentre:eth3. This will cause OpenDaylight to move the eth3 interface onto the br-int bridge.
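A minimal sketch of this override in a custom environment file:

parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:eth3'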
The OVS br-int bridge is used to carry the tenant traffic. OpenDaylight supports VXLAN and VLAN type tenant networks. Red Hat OpenStack Platform director will automatically configure the OVS to use the correct IP interface for VXLAN tenant traffic when VXLAN tenant network types are used.
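For reference, the tenant network type is typically selected in your network-environment.yaml; a minimal sketch for VXLAN tenant networks, using the standard neutron parameters from tripleo-heat-templates:

parameter_defaults:
  NeutronNetworkType: 'vxlan'
  NeutronTunnelTypes: 'vxlan'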
You do not need to create the br-int bridge in the NIC template files. OpenDaylight will create the bridge automatically. However, if you use VLAN tenant networks, you may wish to configure one more physical neutron network and include that interface mapping in the OpenDaylightProviderMappings.
OpenDaylight will then move that interface to the br-int bridge. Because of this behavior, you must use different physical interfaces if you want to use both VXLAN and VLAN tenant networks. Alternatively, you can use an extra bridge within the OpenDaylightProviderMappings for tenant networks.
For example, you could use tenant:br-isolated, where br-isolated is an OVS bridge that contains the tenant network interface and is also configured with an IP address. OpenDaylight will create a patch port from the OVS bridge br-int to br-isolated. This way, you can use the br-isolated bridge for VXLAN traffic, as well as for transporting the VLAN traffic.
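A sketch of that setup, assuming br-isolated is defined in your NIC templates; the nic4 interface name and the TenantIpSubnet parameter are illustrative and depend on your network isolation templates:

parameter_defaults:
  OpenDaylightProviderMappings: 'datacentre:br-ex,tenant:br-isolated'

- type: ovs_bridge
  name: br-isolated
  use_dhcp: false
  addresses:
    - ip_netmask: {get_param: TenantIpSubnet}
  members:
    - type: interface
      name: nic4
      primary: true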
4.2. Install OpenDaylight in a Custom Role
Installing OpenDaylight in a custom role results in an isolated OpenDaylightApi service, running on a node that is not the Controller.
To use a custom role for OpenDaylight, follow this procedure:
Install OpenDaylight using a special role file.
Copy the existing roles_data.yaml file to a new file:
$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml my_roles_data.yaml
Modify the default controller role and remove the OpenDaylightApi service from the controller section of the new file (line 5):
- name: Controller
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::OpenDaylightApi #<--Remove this
    - OS::TripleO::Services::OpenDaylightOvs
Create the OpenDaylight role in the new file:
- name: OpenDaylight
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::OpenDaylightApi
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
Run the installation command using the -r argument (see More information at the end of this procedure). In this example, there are three ironic nodes in total, of which one is reserved for the custom OpenDaylight role:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight-l3.yaml -e network-environment.yaml --compute-scale 1 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -r my_roles_data.yaml
List the instances:
$ nova list
Verify that a dedicated instance exists for the new OpenDaylight role. The following is an example output of the previous command:
+--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks           |
+--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
| 360fb1a6-b5f0-4385-b68a-ff19bcf11bc9 | overcloud-controller-0   | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.4 |
| e38dde02-82da-4ba2-b5ad-d329a6ceaef1 | overcloud-novacompute-0  | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.5 |
| c85ca64a-77f7-4c2c-a22e-b71d849a72e8 | overcloud-opendaylight-0 | BUILD  | spawning   | NOSTATE     | ctlplane=192.0.2.8 |
+--------------------------------------+--------------------------+--------+------------+-------------+--------------------+
More information
- The -r <roles_data>.yaml argument overrides the role definitions within Red Hat OpenStack Platform director at installation time.
- Using a custom role requires an extra ironic node that will be used for the custom role during the installation.
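If you dedicate the extra ironic node to the custom role through profile tagging, the usual director workflow applies. The following sketch assumes a hypothetical opendaylight profile and flavor, with <node-uuid> standing for the UUID of the spare ironic node; the exact flavor-to-role wiring depends on your deployment:

$ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 opendaylight
$ openstack flavor set --property "capabilities:profile"="opendaylight" --property "capabilities:boot_option"="local" opendaylight
$ openstack baremetal node set <node-uuid> --property capabilities='profile:opendaylight,boot_option:local'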