Chapter 2. Working with ML2/OVN

Red Hat OpenStack Platform (RHOSP) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for RHOSP ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver.

Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat recommends the ML2/OVN mechanism driver for most deployments.

If you upgrade from an RHOSP 13 ML2/OVS deployment to RHOSP 16, Red Hat recommends migrating from ML2/OVS to ML2/OVN after the upgrade. If ML2/OVN does not meet your requirements, you can deploy RHOSP with ML2/OVS instead.

2.1. List of components in the RHOSP OVN architecture

The RHOSP OVN architecture replaces the OVS Modular Layer 2 (ML2) mechanism driver with the OVN ML2 mechanism driver to support the Networking API. OVN provides networking services for Red Hat OpenStack Platform.

As illustrated in Figure 2.1, the OVN architecture consists of the following components and services:

ML2 plug-in with OVN mechanism driver
The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the Controller node.
OVN northbound (NB) database (ovn-nb)
This database stores the logical OVN networking configuration from the OVN ML2 plug-in. It typically runs on the Controller node and listens on TCP port 6641.
OVN northbound service (ovn-northd)
This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN Southbound database. It typically runs on the Controller node.
OVN southbound (SB) database (ovn-sb)
This database stores the converted logical data path flows. It typically runs on the Controller node and listens on TCP port 6642.
OVN controller (ovn-controller)
This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes where OS::TripleO::Services::OVNController is defined.
OVN metadata agent (ovn-metadata-agent)
This agent manages the OVS interfaces, network namespaces, and HAProxy processes used to proxy metadata API requests. The agent runs on all Compute and gateway nodes where OS::TripleO::Services::OVNMetadataAgent is defined.
OVS database server (OVSDB)
Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db.
Note

The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema.

Figure 2.1. OVN architecture in a RHOSP environment

2.2. ML2/OVN databases

In Red Hat OpenStack Platform ML2/OVN deployments, network configuration information passes between processes through shared distributed databases. You can inspect these databases to verify the status of the network and identify issues.

OVN northbound database

The northbound database (OVN_Northbound) serves as the interface between OVN and a cloud management system such as Red Hat OpenStack Platform (RHOSP). RHOSP produces the contents of the northbound database.

The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. Every RHOSP Networking service (neutron) object is represented in a table in the northbound database.

OVN southbound database

The southbound database (OVN_Southbound) holds the logical and physical configuration state for the OVN system to support virtual network abstraction. The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements.
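
You can inspect both databases with the ovn-nbctl and ovn-sbctl client tools. The following commands are a minimal sketch: they assume the tools are available on the Controller node that hosts the databases (in containerized deployments you might need to run them inside the OVN database container), and <OVN_DBS_VIP> is a placeholder for the virtual IP address of the OVN database servers.

# Summarize the logical network configuration in the northbound database
ovn-nbctl show

# Summarize the chassis and port bindings recorded in the southbound database
ovn-sbctl show

# Connect to a remote southbound database explicitly over TCP port 6642
ovn-sbctl --db=tcp:<OVN_DBS_VIP>:6642 show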

2.3. The ovn-controller service on Compute nodes

The ovn-controller service runs on each Compute node and connects to the OVN southbound (SB) database server to retrieve the logical flows. The ovn-controller translates these logical flows into physical OpenFlow flows and adds the flows to the OVS bridge (br-int). To communicate with ovs-vswitchd and install the OpenFlow flows, the ovn-controller connects to the local ovsdb-server (which hosts conf.db) using the UNIX socket path that was passed when ovn-controller was started (for example unix:/var/run/openvswitch/db.sock).

The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following example shows the key-value pairs that puppet-vswitch configures in the external_ids column:

hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
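
You can inspect these values, and the OpenFlow flows that ovn-controller installs on br-int, directly on a Compute node. This is a minimal sketch that assumes the standard OVS command-line tools are available on the host:

# Show the external_ids column that ovn-controller reads from the local OVS database
sudo ovs-vsctl get Open_vSwitch . external_ids

# Show a sample of the OpenFlow flows installed on br-int
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | head -n 20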

2.4. OVN metadata agent on Compute nodes

The OVN metadata agent is configured in the tripleo-heat-templates/deployment/ovn/ovn-metadata-container-puppet.yaml file and included in the default Compute role through OS::TripleO::Services::OVNMetadataAgent. As such, the OVN metadata agent with default parameters is deployed as part of the OVN deployment.

OpenStack guest instances access the Networking metadata service at the link-local IP address 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy instance runs in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.

The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>).
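
For example, on a Compute node that hosts instances on a metadata-enabled network, you can list the metadata namespaces and inspect one of them. This is a minimal sketch; <network_uuid> is a placeholder for the UUID of the network:

# List the OVN metadata namespaces on this Compute node
sudo ip netns list | grep ovnmeta

# Show the interfaces inside one metadata namespace, including the address
# that HAProxy listens on
sudo ip netns exec ovnmeta-<network_uuid> ip addr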

2.5. The OVN composable service

Red Hat OpenStack Platform usually consists of nodes in pre-defined roles, such as nodes in Controller roles, Compute roles, and different storage role types. Each of these default roles contains a set of services that are defined in the core heat template collection.

In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service runs on Controller nodes. You can optionally create a custom Networker role and run the OVN composable service on dedicated Networker nodes.

The OVN composable service ovn-dbs is deployed in a container called ovn-dbs-bundle. In a default installation, ovn-dbs is included in the Controller role and runs on Controller nodes. Because the service is composable, you can assign it to another role, such as a Networker role.

If you assign the OVN composable service to another role, ensure that the service is co-located on the same node as the pacemaker service, which controls the OVN database containers.
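
To check where the OVN database containers are running, you can query the Pacemaker cluster status from a Controller node. This is a minimal sketch that assumes Pacemaker manages the ovn-dbs-bundle resource, as in a default director-based deployment:

# Show the Pacemaker resources for the OVN databases and the nodes that run them
sudo pcs status | grep -A 4 ovn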

2.6. Layer 3 high availability with OVN

OVN supports Layer 3 high availability (L3 HA) without any special configuration.

Note

When you create a router, do not use the --ha option because OVN routers are highly available by default. openstack router create commands that include the --ha option fail.

OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller.
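
For example, you can query the northbound database to see which gateway chassis are scheduled for a logical router port and with what priority. This is a minimal sketch that assumes ovn-nbctl is available where the northbound database runs; lrp-<uuid> is a hypothetical port name that you can look up with ovn-nbctl show:

# List all gateway chassis entries and their priorities
ovn-nbctl list Gateway_Chassis

# Show the gateway chassis scheduled for one logical router port
ovn-nbctl lrp-get-gateway-chassis lrp-<uuid>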

Note

L3 HA uses OVN to balance the routers back to the original gateway nodes to avoid any node becoming a bottleneck.

BFD monitoring

OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to node.

Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the Compute nodes so that the gateways can enable and disable routing of packets and ARP responses and announcements.

Each Compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other Compute nodes.
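
You can check the state of the BFD sessions over the Geneve tunnels from any Compute or gateway node. This is a minimal sketch that assumes the OVS command-line tools are available on the host:

# Show the BFD session state for each monitored tunnel interface
sudo ovs-appctl bfd/show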

Note

External network failures are not detected as they would be with an ML2/OVS configuration.

L3 HA for OVN supports the following failure modes:

  • The gateway node becomes disconnected from the network (tunneling interface).
  • ovs-vswitchd stops (ovs-vswitchd is responsible for BFD signaling).
  • ovn-controller stops (ovn-controller removes itself as a registered node).

Note

This BFD monitoring mechanism only works for link failures, not for routing failures.

2.7. Feature support in OVN and OVS mechanism drivers

Review the availability of Red Hat OpenStack Platform (RHOSP) features as part of your OVS to OVN mechanism driver migration plan.

Provisioning Baremetal Machines with OVN DHCP
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  The built-in DHCP server on OVN presently cannot provision baremetal nodes. It cannot serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging (--dhcp-match in dnsmasq), which is not supported in the OVN DHCP server. See https://bugzilla.redhat.com/show_bug.cgi?id=1622154.

North/south routing on VF (direct) ports on VLAN project (tenant) networks
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  Core OVN limitation. See https://bugs.launchpad.net/neutron/+bug/1875852.

Reverse DNS for internal DNS records
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  See https://bugzilla.redhat.com/show_bug.cgi?id=2211426.

Internal DNS resolution for isolated networks
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: No | OVS RHOSP 16.2: Yes | OVS RHOSP 17.1: Yes
  OVN does not support internal DNS resolution for isolated networks because it does not allocate ports for the DNS service. This does not affect OVS deployments because OVS uses dnsmasq. See https://issues.redhat.com/browse/OSP-25661.

Security group logging
  OVN RHOSP 16.2: Tech Preview | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  RHOSP does not support security group logging with the OVS mechanism driver.

Stateless security groups
  OVN RHOSP 16.2: No | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  See Configuring security groups.

Load-balancing service distributed virtual routing (DVR)
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  The OVS mechanism driver routes Load-balancing service traffic through Controller or Network nodes even with DVR enabled. The OVN mechanism driver routes Load-balancing service traffic directly through the Compute nodes.

IPv6 DVR
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  With the OVS mechanism driver, RHOSP does not distribute IPv6 traffic to the Compute nodes, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller or Network nodes. If you need IPv6 DVR, use the OVN mechanism driver.

DVR and layer 3 high availability (L3 HA)
  OVN RHOSP 16.2: Yes | OVN RHOSP 17.1: Yes | OVS RHOSP 16.2: No | OVS RHOSP 17.1: No
  RHOSP deployments with the OVS mechanism driver do not support DVR in conjunction with L3 HA. If you use DVR with RHOSP director, L3 HA is disabled. This means that the Networking service still schedules routers on the Network nodes and load-shares them between the L3 agents. However, if one agent fails, all routers hosted by this agent also fail. This affects only SNAT traffic. Red Hat recommends using the allow_automatic_l3agent_failover feature in such cases, so that if one Network node fails, the routers are rescheduled to a different node.

2.8. Limit for non-secure ports with ML2/OVN

Ports might become unreachable if you disable the port security plug-in extension in Red Hat OpenStack Platform (RHOSP) deployments with the default ML2/OVN mechanism driver and a large number of ports.

In some large ML2/OVN RHOSP deployments, a flow chain limit inside ML2/OVN can drop ARP requests that are targeted to ports where the security plug-in is disabled.

There is no documented maximum limit for the actual number of logical switch ports that ML2/OVN can support, but the approximate limit is 4,000 ports.

Attributes that contribute to the approximate limit are the number of resubmits in the OpenFlow pipeline that ML2/OVN generates, and changes to the overall logical topology.
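
To check whether port security is enabled on a specific port, you can query the Networking service. This is a minimal sketch; <port_id> is a placeholder for the port that you want to inspect:

# Show whether the port security extension is enabled for a given port
openstack port show <port_id> -c port_security_enabled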

2.9. ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios

Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical Account Manager or Global Professional Services to determine whether your OVS deployment meets the criteria for a valid in-place migration scenario.

2.9.1. Validated ML2/OVS to ML2/OVN migration scenarios

DVR to DVR

Start: RHOSP 16.1.1 or later with OVS with DVR.

End: Same RHOSP version and release with OVN with DVR.

SR-IOV was not present in the starting environment or added during or after the migration.

Centralized routing + SR-IOV with virtual function (VF) ports only

Start: RHOSP 16.1.1 or later with OVS (no DVR) and SR-IOV.

End: Same RHOSP version and release with OVN (no DVR) and SR-IOV.

Workloads used only SR-IOV virtual function (VF) ports. SR-IOV physical function (PF) ports caused migration failure.

2.9.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified

You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red Hat announces that the underlying issues are resolved.

OVS deployment uses network functions virtualization (NFV)
Red Hat supports new deployments with ML2/OVN and NFV, but has not successfully tested migration of an ML2/OVS and NFV deployment to ML2/OVN. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1925290.
SR-IOV with physical function (PF) ports
Migration tests failed when any workload uses an SR-IOV PF port. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1879546.
OVS uses trunk ports
If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration. The migration does not properly set up the trunked ports in the OVN environment. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652. A simple way to check for trunk ports is shown after this list.
DVR with VLAN project (tenant) networks
Do not migrate to ML2/OVN with DVR and VLAN project networks. You can migrate to ML2/OVN with centralized routing. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930.
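
As a quick pre-migration check for the trunk port scenario, you can list the trunks that the Networking service manages. This is a minimal sketch that assumes the trunk service plug-in is enabled and that you have administrative credentials loaded:

# List trunks; any output means the deployment uses trunk ports
openstack network trunk list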

2.9.3. ML2/OVS to ML2/OVN in-place migration and security group rules

Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible with the target ML2/OVN deployment.

For example, the default security group includes rules that allow egress to the DHCP server. If you deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance would not boot. In this case, to restore DHCP access, you could add the following rules:

# Allow VM to contact dhcp server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 ${SEC_GROUP_ID}
# Allow VM to contact metadata server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 ${SEC_GROUP_ID}

# Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case!
openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 ${SEC_GROUP_ID}
# Allow VM to contact metadata server (ipv6)
openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe ${SEC_GROUP_ID}
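
After you add the rules, you can verify that they exist in the security group. This is a minimal check that uses the same ${SEC_GROUP_ID} shell variable as the commands above:

# List the rules in the security group to confirm the egress rules are present
openstack security group rule list ${SEC_GROUP_ID}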

2.10. Using ML2/OVS instead of the default ML2/OVN in a new RHOSP 16.2 deployment

In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service. You can change this setting if your application requires the ML2/OVS mechanism driver.

Procedure

  1. Log in to your undercloud as the stack user.
  2. In the template file, /home/stack/templates/containers-prepare-parameter.yaml, use ovs instead of ovn as the value of the neutron_driver parameter:

    parameter_defaults:
      ContainerImagePrepare:
      - set:
          ...
          neutron_driver: ovs
  3. In the environment file, /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml, ensure that the NeutronNetworkType parameter includes vxlan or gre instead of geneve.

    Example

    parameter_defaults:
      ...
      NeutronNetworkType: 'vxlan'

  4. Run the openstack overcloud deploy command and include the core heat templates, environment files, and the files that you modified.

    Important

    The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
    -e /home/stack/templates/containers-prepare-parameter.yaml

2.11. Keeping ML2/OVS after an upgrade instead of the default ML2/OVN

In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service. If you upgrade from an earlier version of RHOSP that used ML2/OVS, Red Hat recommends migrating from ML2/OVS to ML2/OVN after the upgrade.

If instead you choose to keep using ML2/OVS after the upgrade, follow Red Hat’s upgrade procedure as documented, and do not perform the ML2/OVS-to-ML2/OVN migration.

2.12. Deploying a custom role with ML2/OVN

In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service runs on Controller nodes. You can optionally use supported custom roles like those described in the following examples.

Networker
Run the OVN composable services on dedicated Networker nodes.
Networker with SR-IOV
Run the OVN composable services on dedicated Networker nodes with SR-IOV.
Controller with SR-IOV
Run the OVN composable services on SR-IOV capable Controller nodes.

You can also generate your own custom roles.

Limitations

The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release.

  • All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports.
  • North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router’s gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852.

Procedure

  1. Log in to the undercloud host as the stack user and source the stackrc file.

    $ source stackrc
  2. Choose the custom roles file that is appropriate for your deployment. Use it directly in the deploy command if it suits your needs as-is. Or you can generate your own custom roles file that combines other custom roles files.

    Deployment                                      Role              Role file
    Networker role                                  Networker         Networker.yaml
    Networker role with SR-IOV                      NetworkerSriov    NetworkerSriov.yaml
    Co-located control and networker with SR-IOV    ControllerSriov   ControllerSriov.yaml

  3. [Optional] Generate a new custom roles data file that combines one of these custom roles files with other custom roles files. Follow the instructions in Creating a roles_data file in the Advanced Overcloud Customization guide. Include the appropriate source role files depending on your deployment.
  4. [Optional] To identify specific nodes for the role, you can create a specific hardware flavor and assign the flavor to specific nodes. Then use an environment file to define the flavor for the role and to specify a node count. For more information, see the example in Creating a new role in the Advanced Overcloud Customization guide.
  5. Create an environment file as appropriate for your deployment.

    Deployment                    Sample environment file
    Networker role                neutron-ovn-dvr-ha.yaml
    Networker role with SR-IOV    ovn-sriov.yaml

  6. Include the following settings as appropriate for your deployment.

    Networker role

    ControllerParameters:
        OVNCMSOptions: ""
    ControllerSriovParameters:
        OVNCMSOptions: ""
    NetworkerParameters:
        OVNCMSOptions: "enable-chassis-as-gw"
    NetworkerSriovParameters:
        OVNCMSOptions: ""

    Networker role with SR-IOV

    OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None

    ControllerParameters:
        OVNCMSOptions: ""
    ControllerSriovParameters:
        OVNCMSOptions: ""
    NetworkerParameters:
        OVNCMSOptions: ""
    NetworkerSriovParameters:
        OVNCMSOptions: "enable-chassis-as-gw"

    Co-located control and networker with SR-IOV

    OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None

    ControllerParameters:
        OVNCMSOptions: ""
    ControllerSriovParameters:
        OVNCMSOptions: "enable-chassis-as-gw"
    NetworkerParameters:
        OVNCMSOptions: ""
    NetworkerSriovParameters:
        OVNCMSOptions: ""

  7. Deploy the overcloud. Include the environment file in your deployment command with the -e option. Include the custom roles data file in your deployment command with the -r option. For example: -r Networker.yaml or -r mycustomrolesfile.yaml.

Verification steps - OVN deployments

  1. Log in to a Controller or Networker node as the overcloud SSH user, which is heat-admin by default.

    Example

    ssh heat-admin@controller-0

  2. Ensure that ovn_metadata_agent is running on Controller and Networker nodes.

    $ sudo podman ps | grep ovn_metadata

    Sample output

    a65125d9588d  undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:16.2_20200813.1  kolla_start           23 hours ago  Up 21 hours ago         ovn_metadata_agent

  3. Ensure that Controller nodes with OVN services or dedicated Networker nodes have been configured as gateways for OVS.

    $ sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options

    Sample output

    ...
        enable-chassis-as-gw
    ...

Verification steps - SR-IOV deployments

  1. Log in to a Compute node as the overcloud SSH user, which is heat-admin by default.

    Example

    ssh heat-admin@compute-0

  2. Ensure that neutron_sriov_agent is running on the Compute nodes.

    $ sudo podman ps | grep neutron_sriov_agent

    Sample output

    f54cbbf4523a  undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-sriov-agent:16.2_20200813.1
    kolla_start  23 hours ago  Up 21 hours ago         neutron_sriov_agent

  3. Ensure that network-available SR-IOV NICs have been successfully detected.

    $ sudo podman exec -uroot galera-bundle-podman-0 mysql nova -e 'select hypervisor_hostname,pci_stats from compute_nodes;'

    Sample output

    computesriov-1.localdomain	{... {"dev_type": "type-PF", "physical_network": "datacentre", "trusted": "true"}, "count": 1}, ... {"dev_type": "type-VF", "physical_network": "datacentre", "trusted": "true", "parent_ifname": "enp7s0f3"}, "count": 5}, ...}
    computesriov-0.localdomain	{... {"dev_type": "type-PF", "physical_network": "datacentre", "trusted": "true"}, "count": 1}, ... {"dev_type": "type-VF", "physical_network": "datacentre", "trusted": "true", "parent_ifname": "enp7s0f3"}, "count": 5}, ...}

2.13. SR-IOV with ML2/OVN and native OVN DHCP

You can deploy a custom role to use SR-IOV in an ML2/OVN deployment with native OVN DHCP. See Section 2.12, “Deploying a custom role with ML2/OVN”.

Limitations

The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release.

  • All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports.
  • North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router’s gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852.
