Chapter 1. Introduction to RHOSO dynamic routing


Red Hat OpenStack Services on OpenShift (RHOSO) supports dynamic routing with border gateway protocol (BGP).

1.1. About RHOSO dynamic routing

Red Hat OpenStack Services on OpenShift (RHOSO) supports ML2/OVN dynamic routing with border gateway protocol (BGP) in the data plane. RHOSO dynamic routing provides a mechanism for load balancing and high availability that differs from the traditional approaches most internet service providers employ today.

With RHOSO dynamic routing, you can expose IP addresses for VMs and load balancers on provider networks during creation and startup, or whenever they are associated with a floating IP address. The same functionality is available on project networks when a special flag is set.

RHOSO dynamic routing also provides these benefits:

  • Improved management of data plane traffic.
  • Simplified configuration with fewer differences between roles.
  • Distribution of L2 provider VLANs and floating IP (FIP) subnets across L3 boundaries, with no requirement to span VLANs across racks (for non-overlapping CIDRs only).
  • Failover of the whole subnet for public provider IPs or FIPs from one site to another.
  • Next-generation data center and hyperscale fabric support.

1.2. BGP components used in RHOSO dynamic routing

Red Hat OpenStack Services on OpenShift (RHOSO) relies on several components to provide dynamic routing to the Layer 3 data center.

Figure 1.1. RHOSO dynamic routing components

OVN BGP agent (ovn_bgp_agent container)

A Python-based daemon that runs in the ovn_bgp_agent container on each Compute and Networker node. The agent monitors the Open Virtual Network (OVN) northbound database for certain VM and floating IP (FIP) events. When these events occur, the agent notifies the FRR BGP daemon (bgpd) to advertise the IP address or FIP associated with the VM. The agent also triggers actions that route the external traffic to the OVN overlay. Because the agent uses a multi-driver implementation, you can configure the agent for the specific infrastructure that runs on top of OVN, such as RHOSO or Red Hat OpenShift.
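As an illustration, you can inspect the agent on a given node. This is a hedged sketch, not a required procedure; the container name is taken from the text above, so confirm it in your deployment:

```shell
# List the agent container on a Compute or Networker node
# (container name assumed from the text above; confirm with `podman ps`).
sudo podman ps --filter name=ovn_bgp_agent

# Follow the agent log to watch it react to VM and FIP events.
sudo podman logs -f ovn_bgp_agent
```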

Note

Starting with RHOSO 18.0.10 (Feature Release 3), the ovn-bgp-agent, the BGP integration component in RHOSO, is deprecated. It will be removed and replaced by an alternative BGP integration mechanism in a future release. Until then, Red Hat provides only bug fixes and support for this feature.

Free Range Routing (FRRouting, or FRR) (frr container)
An IP routing protocol suite of daemons that run in the frr container on both Compute and Networker nodes and work together to build the routing table. FRR supports equal-cost multi-path routing (ECMP), although each protocol daemon uses different methods to manage ECMP policy. The FRR suite, supplied with Red Hat Enterprise Linux (RHEL), provides several daemons. RHOSO uses the FRR bgpd, bfdd, and Zebra daemons.
BGP daemon (frr container)
A daemon (bgpd) that runs in the frr container to implement version 4 of Border Gateway Protocol (BGP). The bgpd daemon uses capability negotiation to detect the remote peer’s capabilities. If a peer is configured exclusively as an IPv4 unicast neighbor, bgpd does not send capability negotiation packets. The BGP daemon adds routes or IP addresses, which causes the Zebra daemon to expose them.
BFD daemon (frr container)
A daemon (bfdd) in the FRR suite that implements Bidirectional Forwarding Detection (BFD). This protocol provides failure detection between adjacent forwarding engines.
Zebra daemon (frr container)
A daemon that coordinates information from the various FRR daemons, and communicates routing decisions directly to the kernel routing table.
VTY shell (frr container)
A shell for FRR daemons. VTY shell (vtysh) aggregates all the CLI commands defined in each of the daemons and presents them in a single shell.
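For example, you can run standard FRR show commands through vtysh inside the frr container. Treat this as an illustrative sketch rather than a required procedure:

```shell
# Open an interactive VTY shell inside the frr container.
sudo podman exec -it frr vtysh

# Or run single commands non-interactively:
sudo podman exec frr vtysh -c "show bgp summary"
sudo podman exec frr vtysh -c "show ip route"
```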

1.3. BGP advertisement and traffic redirection

In deployments that use Red Hat OpenStack Services on OpenShift (RHOSO) dynamic routing, network traffic flows to and from VMs, load balancers, and floating IPs (FIPs) using advertised routes. After the traffic arrives at the node, the OVN BGP agent adds the IP rules, routes, and OVS flow rules to redirect traffic to the OVS provider bridge (br-ex) by using Red Hat Enterprise Linux (RHEL) kernel networking.

The process of advertising a network route begins with the OVN BGP agent triggering Free Range Routing (FRRouting, or FRR) to advertise and withdraw directly connected routes. The OVN BGP agent performs these steps to properly configure FRR to ensure that IP addresses are advertised whenever they are added to the bgp-nic interface:

  1. FRR launches VTY shell to connect to the FRR socket:

    $ vtysh --vty_socket -c <command_file>
  2. VTY shell passes a file that contains the following commands:

    LEAK_VRF_TEMPLATE = '''
    router bgp {{ bgp_as }}
      address-family ipv4 unicast
        import vrf {{ vrf_name }}
      exit-address-family
    
      address-family ipv6 unicast
        import vrf {{ vrf_name }}
      exit-address-family
    
    router bgp {{ bgp_as }} vrf {{ vrf_name }}
      bgp router-id {{ bgp_router_id }}
      address-family ipv4 unicast
        redistribute connected
      exit-address-family
    
      address-family ipv6 unicast
        redistribute connected
      exit-address-family
    ...

    The commands that VTY shell passes do the following:

    1. Create a VRF named bgp_vrf by default.
    2. Associate a dummy interface type with the VRF.

      By default, the dummy interface is named bgp-nic.

    3. Add an IP address to the OVS provider bridges to ensure that Address Resolution Protocol (ARP) and Neighbor Discovery Protocol (NDP) are enabled.
  3. The Zebra daemon monitors IP addresses for VMs and load balancers as they are added and deleted on any local interface, and Zebra advertises and withdraws the route.

    Because FRR is configured with the redistribute connected option enabled, advertising and withdrawing routes merely consists of exposing or removing the route from the dummy interface, bgp-nic.

    Note

    Exposing VMs connected to tenant networks is disabled by default. If it is enabled in the RHOSO configuration, the OVN BGP agent exposes the neutron router gateway port. The OVN BGP agent injects traffic that flows to VMs on tenant networks into the OVN overlay through the node that hosts the chassisredirect logical router ports (CR-LRP).

  4. FRR exposes the IP address either on the node that hosts the VM or load balancer, or on the node that contains the OVN router gateway port.
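The redistribute connected mechanism described above can be sketched from the command line: because FRR redistributes connected routes, adding a host route to the dummy interface is all that is needed for Zebra to pick it up and for bgpd to advertise it. The address below is an illustrative placeholder, and in practice the OVN BGP agent performs this step for you:

```shell
# Add a VM address (placeholder) to the dummy interface;
# Zebra sees the new connected route and bgpd advertises it.
sudo ip addr add 172.24.100.10/32 dev bgp-nic

# Removing the address withdraws the route again.
sudo ip addr del 172.24.100.10/32 dev bgp-nic
```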

The OVN BGP agent performs the necessary configuration to redirect traffic to and from the OVN overlay by using RHEL kernel networking and OVS, and then FRR exposes the IP address on the proper node.

When the OVN BGP agent starts, it does the following actions:

  1. Adds an IP address to the OVS provider bridges to enable ARP and NDP.
  2. Adds an entry to the routing table for each OVS provider bridge in /etc/iproute2/rt_tables.

    Note

    In the RHEL kernel, the maximum number of routing tables is 2³¹. This limits the number of provider networks to 2,147,483,648 (2³¹).

  3. Connects a VLAN device to the bridge and enables ARP and NDP (on VLAN provider networks only).
  4. Cleans up any extra OVS flows at the OVS provider bridges.
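Step 2 above results in entries similar to the following in /etc/iproute2/rt_tables. The table number is an illustrative placeholder; the agent chooses the actual value at runtime:

```shell
$ cat /etc/iproute2/rt_tables
# reserved values
255     local
254     main
253     default
0       unspec
# added by the OVN BGP agent (table number is illustrative)
200     br-ex
```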

During regular resync events or during start-up, the OVN BGP agent performs the following actions:

  1. Adds an IP address rule to apply specific routes to the routing table.

    In the following example, this rule is associated with the OVS provider bridge:

    $ ip rule
    
    0:      from all lookup local
    1000:   from all lookup [l3mdev-table]
    32000:  from all to IP lookup br-ex    # br-ex is the OVS provider bridge
    32000:  from all to CIDR lookup br-ex  # for VMs in tenant networks
    32766:  from all lookup main
    32767:  from all lookup default
  2. Adds an IP address route to the OVS provider bridge routing table to route traffic to the OVS provider bridge device:

    $ ip route show table br-ex
    
    default dev br-ex scope link
    CIDR via CR-LRP_IP dev br-ex    # for VMs in tenant networks
    CR-LRP_IP dev br-ex scope link  # for VM-in-tenant-network redirection
    IP dev br-ex scope link         # IPs on provider networks or FIPs
  3. Routes traffic to OVN through the OVS provider bridge (br-ex), using one of the following methods depending on whether IPv4 or IPv6 is used:

    1. For IPv4, adds a static ARP entry for the OVN router gateway ports, CR-LRP, because OVN does not reply to ARP requests outside its L2 network:

      $ ip nei
      
      ...
      CR-LRP_IP dev br-ex lladdr CR-LRP_MAC PERMANENT
      ...
    2. For IPv6, adds an NDP proxy:

      $ ip -6 nei add proxy CR-LRP_IP dev br-ex
  4. Sends the traffic from the OVN overlay to kernel networking by adding a new flow at the OVS provider bridges so that the destination MAC address changes to the MAC address of the OVS provider bridge (actions=mod_dl_dst:OVN_PROVIDER_BRIDGE_MAC,NORMAL):

    $ sudo ovs-ofctl dump-flows br-ex
    
    cookie=0x3e7, duration=77.949s, table=0, n_packets=0, n_bytes=0, priority=900,ip,in_port="patch-provnet-1" actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL
    cookie=0x3e7, duration=77.937s, table=0, n_packets=0, n_bytes=0, priority=900,ipv6,in_port="patch-provnet-1" actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL
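A flow equivalent to the ones dumped above could be added manually with ovs-ofctl. The port name and MAC address are illustrative placeholders taken from the dump above, and in practice the OVN BGP agent programs these flows for you:

```shell
# Rewrite the destination MAC of IPv4 traffic arriving on the provider
# patch port so that kernel networking accepts it on the bridge interface.
sudo ovs-ofctl add-flow br-ex \
  "cookie=0x3e7,priority=900,ip,in_port=patch-provnet-1,actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL"
```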