Chapter 7. Route advertisements
7.1. About route advertisements
To simplify network management and improve failover visibility, you can use route advertisements to share pod and egress IP routes between your cluster and the provider network. This feature requires the OVN-Kubernetes plugin and a Border Gateway Protocol (BGP) provider.
For more information, see About BGP routing.
7.1.1. Advertise cluster network routes with Border Gateway Protocol
To simplify routing and improve failover visibility without manual route management, you can enable route advertisements. With route advertisements enabled, OVN-Kubernetes advertises routes for the default pod network and cluster user-defined networks (CUDNs), including EgressIPs, to the provider network, and imports routes from the provider network into the default pod network and CUDNs.
From the provider network, IP addresses advertised from the default pod network and user-defined networks can be reached directly, and vice versa.
For example, with routes imported to the default pod network, you no longer need to manually configure routes on each node. Previously, to approximate a similar configuration, you might have set the routingViaHost parameter to true in the Network custom resource (CR) for your cluster and then manually configured routes on each node. When you enable route advertisements, you can leave the routingViaHost parameter set to false in the Network CR without any manual route configuration on the nodes.
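To show where the routingViaHost parameter discussed above lives, the following sketch prints the relevant fragment of the Network CR spec. This is an illustrative fragment only, not a complete Network CR; the field path follows the ovnKubernetesConfig.gatewayConfig structure referenced later in this chapter.

```shell
# Illustrative sketch: print the gatewayConfig fragment showing where
# routingViaHost lives in the Network CR spec. With route advertisements
# enabled, the parameter can remain false.
cat <<'EOF'
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: false
EOF
```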
Route reflectors on the provider network are supported and can reduce the number of BGP connections required to advertise routes on large networks.
If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. This means that you can locate cluster nodes that host EgressIPs on different layer 2 segments. Previously, only the layer 2 provider network was aware of failovers, which required all egress nodes to be on the same layer 2 segment.
7.1.1.1. Supported platforms
Advertising routes with Border Gateway Protocol (BGP) is supported on the bare-metal infrastructure type.
7.1.1.2. Infrastructure requirements
To use route advertisements, you must have configured BGP for your network infrastructure. Outages or misconfigurations of your network infrastructure might cause disruptions to your cluster network.
7.1.1.3. Compatibility with other networking features
Route advertisements have the following compatibility with other OpenShift Container Platform networking features:
- Multiple external gateways (MEG)
- MEG is not supported with this feature.
- EgressIPs
Supports the use and advertisement of EgressIPs. The node where an egress IP address resides advertises the EgressIP. An egress IP address must be on the same layer 2 network subnet as the egress node. The following limitations apply:
- Advertising EgressIPs from a cluster user-defined network (CUDN) operating in layer 2 mode is not supported.
- Advertising EgressIPs for a network that has both egress IP addresses assigned to the primary network interface and egress IP addresses assigned to additional network interfaces is impractical. All EgressIPs are advertised on all of the BGP sessions of the selected FRRConfiguration instances, regardless of whether these sessions are established over the same interface that the EgressIP is assigned to or not, potentially leading to unwanted advertisements.
- Services
- Works with the MetalLB Operator to advertise services to the provider network.
- Egress service
- Full support.
- Egress firewall
- Full support.
- Egress QoS
- Full support.
- Network policies
- Full support.
- Direct pod ingress
- Full support for the default cluster network and cluster user-defined (CUDN) networks.
7.1.1.4. Considerations for use with the MetalLB Operator
The MetalLB Operator is installed as an add-on to the cluster. Deployment of the MetalLB Operator automatically enables FRR-K8s as an additional routing capability provider. This feature and the MetalLB Operator use the same FRR-K8s deployment.
7.1.1.5. Considerations for naming cluster user-defined networks (CUDNs)
When referencing a VRF device in a FRRConfiguration CR, the VRF name is the same as the CUDN name for VRF names that are less than or equal to 15 characters. It is recommended to use a VRF name no longer than 15 characters so that the VRF name can be inferred from the CUDN name.
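The 15-character constraint comes from the Linux limit on network device names. The following sketch checks hypothetical CUDN names against that limit; the names are examples, not values from this document.

```shell
# Sketch: check candidate CUDN names against the 15-character limit for
# Linux network device names, so the VRF name can match the CUDN name.
# The names below are hypothetical examples.
for name in blue red tenant-a-production-network; do
  if [ "${#name}" -le 15 ]; then
    echo "$name: usable as a matching VRF name"
  else
    echo "$name: ${#name} characters, exceeds the 15-character limit"
  fi
done
```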
7.1.1.6. BGP routing custom resources
The following custom resources (CRs) are used to configure route advertisements with BGP:
- RouteAdvertisements
- This CR defines the advertisements for the BGP routing. From this CR, the OVN-Kubernetes controller generates an FRRConfiguration object that configures the FRR daemon to advertise cluster network routes. This CR is cluster scoped.
- FRRConfiguration
- This CR is used to define BGP peers and to configure route imports from the provider network into the cluster network. Before applying RouteAdvertisements objects, at least one FRRConfiguration object must be defined to configure the BGP peers. This CR is namespaced.
7.1.1.7. OVN-Kubernetes controller generation of FRRConfiguration objects
An FRRConfiguration object is generated for each network and node selected by a RouteAdvertisements CR with the appropriate advertised prefixes that apply to each node. The OVN-Kubernetes controller checks whether the RouteAdvertisements-CR-selected nodes are a subset of the nodes that are selected by the RouteAdvertisements-CR-selected FRR configurations.
Any filtering or selection of prefixes to receive is not considered in FRRConfiguration objects that are generated from the RouteAdvertisements CRs. Configure any prefixes to receive on other FRRConfiguration objects. OVN-Kubernetes imports routes from the VRF into the appropriate network.
7.1.1.8. Cluster Network Operator configuration
The Cluster Network Operator (CNO) API exposes several fields to configure route advertisements:
- spec.additionalRoutingCapabilities.providers: Specifies an additional routing provider, which is required to advertise routes. The only supported value is FRR, which enables deployment of the FRR-K8s daemon for the cluster. When enabled, the FRR-K8s daemon is deployed on all nodes.
- spec.defaultNetwork.ovnKubernetesConfig.routeAdvertisements: Enables route advertisements for the default cluster network and CUDN networks. The spec.additionalRoutingCapabilities.providers field must include FRR to enable this feature.
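The two fields above can be set together with one merge patch. The following sketch only prints the patch payload for inspection; applying it with oc patch against a live cluster is shown in the procedure later in this chapter.

```shell
# Sketch: the merge-patch payload that sets both CNO fields described
# above, printed for inspection before use with `oc patch`.
cat <<'EOF'
{
  "spec": {
    "additionalRoutingCapabilities": { "providers": ["FRR"] },
    "defaultNetwork": {
      "ovnKubernetesConfig": { "routeAdvertisements": "Enabled" }
    }
  }
}
EOF
```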
7.1.2. RouteAdvertisements object configuration
To control how cluster networks and egress IP addresses are advertised to external routers, configure the cluster-scoped RouteAdvertisements object to specify networks and select the appropriate nodes and routing targets for your environment.
The fields for the RouteAdvertisements custom resource (CR) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | Specifies the name of the RouteAdvertisements CR. |
| spec.advertisements | array | Specifies an array that can contain a list of different types of networks to advertise. Supports only the PodNetwork and EgressIP values. |
| spec.frrConfigurationSelector | object | Determines which FRRConfiguration CRs to use for the advertisements. |
| spec.networkSelectors | array | Specifies which networks to advertise among the default cluster network and cluster user-defined networks (CUDNs). |
| spec.nodeSelector | object | Limits the advertisements to selected nodes. When empty, all nodes are selected. |
| spec.targetVRF | string | Determines which VRF to advertise the routes in. Routes are advertised on the routers associated with this virtual routing and forwarding (VRF) target, as specified on the selected FRRConfiguration CRs. When omitted, the default VRF is used; the value auto advertises routes on the VRF associated with each selected network. |
7.1.3. Examples of advertising pod IP addresses with BGP
To implement Border Gateway Protocol (BGP) for your cluster, you can use these examples to configure route advertisements for pod IP addresses and egress IP addresses. Examples include configurations for default cluster networks, user-defined networks, and VRF-lite designs.
The following examples describe several configurations for advertising pod IP addresses and EgressIPs with Border Gateway Protocol (BGP). The external network border router has the 172.18.0.5 IP address. These configurations assume that you have configured an external route reflector that can relay routes to all nodes on the cluster network.
7.1.3.1. Advertising the default cluster network
In this scenario, the default cluster network is exposed to the external network so that pod IP addresses and EgressIPs are advertised to the provider network.
This scenario relies upon the following RouteAdvertisements object:
RouteAdvertisements CR
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  advertisements:
  - PodNetwork
  - EgressIP
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  frrConfigurationSelector:
    matchLabels:
      routeAdvertisements: receive-all
  nodeSelector: {}
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes for the default cluster network.
An example of an FRRConfiguration CR generated by OVN-Kubernetes
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: ovnk-generated-abcdef
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
        toAdvertise:
          allowed:
            prefixes:
            - <default_network_host_subnet>
      prefixes:
      - <default_network_host_subnet>
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: ovn-worker
In the example generated FRRConfiguration object, <default_network_host_subnet> is the subnet of the default cluster network that is advertised to the provider network.
7.1.3.2. Advertising pod IPs from a cluster user-defined network over BGP
In this scenario, the blue cluster user-defined network (CUDN) is exposed to the external network so that the network’s pod IP addresses and EgressIPs are advertised to the provider network.
This scenario relies upon the following FRRConfiguration object:
FRRConfiguration CR
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-all
  namespace: openshift-frr-k8s
  labels:
    routeAdvertisements: receive-all
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        disableMP: true
        toReceive:
          allowed:
            mode: all
With this FRRConfiguration object, routes are imported from neighbor 172.18.0.5 into the default VRF and are available to the default cluster network.
The CUDNs are advertised over the default VRF. This configuration includes the following CUDNs:
- Red CUDN
  - A VRF named red associated with a CUDN named red
  - A subnet of 10.0.0.0/24
- Blue CUDN
  - A VRF named blue associated with a CUDN named blue
  - A subnet of 10.0.1.0/24
In this configuration, two separate CUDNs are defined. The red network covers the 10.0.0.0/24 subnet and the blue network covers the 10.0.1.0/24 subnet. The red and blue networks are labeled as export: true.
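As a quick sanity check on the example addressing above, the following sketch confirms that the red and blue subnets do not overlap. Because both are /24 prefixes, comparing the first three octets of the network addresses is sufficient here.

```shell
# Sketch: confirm the red and blue CUDN subnets from the example do not
# overlap. Both are /24 prefixes, so comparing the first three octets is
# sufficient for this case.
red=10.0.0.0/24
blue=10.0.1.0/24
red_net=${red%.*}    # strips the last octet and mask: 10.0.0
blue_net=${blue%.*}  # strips the last octet and mask: 10.0.1
if [ "$red_net" = "$blue_net" ]; then
  echo "overlap: $red and $blue"
else
  echo "no overlap between $red and $blue"
fi
```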
The following RouteAdvertisements CR describes the configuration for the red and blue tenants:
RouteAdvertisements CR for the red and blue tenants
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: advertise-cudns
spec:
  advertisements:
  - PodNetwork
  - EgressIP
  networkSelectors:
  - networkSelectionType: ClusterUserDefinedNetworks
    clusterUserDefinedNetworkSelector:
      networkSelector:
        matchLabels:
          export: "true"
  frrConfigurationSelector:
    matchLabels:
      routeAdvertisements: receive-all
  nodeSelector: {}
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes. The following example is of one such configuration object, with the number of FRRConfiguration objects created depending on the node and networks selected.
An example of an FRRConfiguration CR generated by OVN-Kubernetes
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: ovnk-generated-abcdef
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      vrf: blue
      imports:
      - vrf: default
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
        toAdvertise:
          allowed:
            prefixes:
            - 10.0.1.0/24
      prefixes:
      - 10.0.1.0/24
      imports:
      - vrf: blue
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: ovn-worker
The generated FRRConfiguration object configures the subnet 10.0.1.0/24, which belongs to network blue, to be imported into the default VRF and advertised to the 172.18.0.5 neighbor. An FRRConfiguration object is generated for each network and node selected by a RouteAdvertisements CR, with the appropriate prefixes that apply to each node.
When the targetVRF field is omitted, the routes are leaked and advertised over the default VRF. Additionally, routes that were imported to the default VRF after the definition of the initial FRRConfiguration object are also imported into the blue VRF.
7.1.3.3. Advertising pod IPs from a cluster user-defined network over BGP with VPN
In this scenario, a VLAN interface is attached to the VRF device associated with the blue network. This setup provides a VRF-lite design, where FRR-K8s is used to advertise the blue network only over the corresponding BGP session on the blue network VRF/VLAN link to the next-hop Provider Edge (PE) router. The red tenant uses the same configuration. The blue and red networks are labeled as export: true.
This scenario does not support the use of EgressIPs.
This configuration includes the following CUDNs:
- Red CUDN
  - A VRF named red associated with a CUDN named red
  - A VLAN interface attached to the VRF device and connected to the external PE router
  - An assigned subnet of 10.0.2.0/24
- Blue CUDN
  - A VRF named blue associated with a CUDN named blue
  - A VLAN interface attached to the VRF device and connected to the external PE router
  - An assigned subnet of 10.0.1.0/24
This approach is available only when you set routingViaHost=true in the ovnKubernetesConfig.gatewayConfig specification of the OVN-Kubernetes network plugin.
In the following configuration, an additional FRRConfiguration CR configures peering with the PE router on the blue and red VLANs:
FRRConfiguration CR manually configured for BGP VPN setup
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: vpn-blue-red
  namespace: openshift-frr-k8s
  labels:
    routeAdvertisements: vpn-blue-red
spec:
  bgp:
    routers:
    - asn: 64512
      vrf: blue
      neighbors:
      - address: 182.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
    - asn: 64512
      vrf: red
      neighbors:
      - address: 192.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
The following RouteAdvertisements CR describes the configuration for the blue and red tenants:
RouteAdvertisements CR for the blue and red tenants
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: advertise-vrf-lite
spec:
  targetVRF: auto
  advertisements:
  - "PodNetwork"
  nodeSelector: {}
  frrConfigurationSelector:
    matchLabels:
      routeAdvertisements: vpn-blue-red
  networkSelectors:
  - networkSelectionType: ClusterUserDefinedNetworks
    clusterUserDefinedNetworkSelector:
      networkSelector:
        matchLabels:
          export: "true"
In the RouteAdvertisements CR, the targetVRF is set to auto so that advertisements occur within the VRF device that corresponds to the individual networks that are selected. In this scenario, the pod subnet for blue is advertised over the blue VRF device, and the pod subnet for red is advertised over the red VRF device. Additionally, each BGP session imports routes to only the corresponding CUDN VRF as defined by the initial FRRConfiguration object.
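The targetVRF behavior described above can be sketched as a simple mapping. The network names come from this example; the logic is illustrative, not the controller's implementation.

```shell
# Sketch: how targetVRF affects where routes are advertised. With "auto",
# each selected network uses the VRF associated with it; with any other
# value, that fixed VRF is used for all selected networks.
target_vrf=auto
for net in blue red; do
  if [ "$target_vrf" = "auto" ]; then
    vrf=$net            # advertise in the VRF associated with the network
  else
    vrf=$target_vrf     # advertise in the named VRF, e.g. "default"
  fi
  echo "network $net -> VRF $vrf"
done
```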
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes for the blue and red tenants.
FRRConfiguration CR generated by OVN-Kubernetes for blue and red tenants
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: ovnk-generated-abcde
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 182.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
        toAdvertise:
          allowed:
            prefixes:
            - 10.0.1.0/24
      vrf: blue
      prefixes:
      - 10.0.1.0/24
    - asn: 64512
      neighbors:
      - address: 192.18.0.5
        asn: 64512
        toReceive:
          allowed:
            mode: filtered
        toAdvertise:
          allowed:
            prefixes:
            - 10.0.2.0/24
      vrf: red
      prefixes:
      - 10.0.2.0/24
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: ovn-worker
In this scenario, any filtering or selection of routes to receive must be done in the FRRConfiguration CR that defines peering relationships.
7.2. Enabling route advertisements
To improve network reachability and failover visibility for your cluster, you can enable route advertisements for pod and egress IP addresses. This configuration requires the OVN-Kubernetes network plugin and allows your cluster to share routes with an external provider network.
As a cluster administrator, you can configure additional route advertisements for your cluster. You must use the OVN-Kubernetes network plugin.
7.2.1. Enabling route advertisements
To improve network reachability and failover visibility, you can enable additional routing support for your cluster. You can enable route advertisements to manage network traffic within your environment.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To enable a routing provider and additional route advertisements, enter the following command:
$ oc patch Network.operator.openshift.io cluster --type=merge \
    -p='{
      "spec": {
        "additionalRoutingCapabilities": {
          "providers": ["FRR"]
        },
        "defaultNetwork": {
          "ovnKubernetesConfig": {
            "routeAdvertisements": "Enabled"
          }
        }
      }
    }'
7.3. Disabling route advertisements
To stop the broadcast of cluster network routes and egress IP addresses to your provider network, you can disable route advertisements. Disabling this feature removes the automatically generated routing configurations while maintaining your existing network infrastructure.
7.3.1. Disabling route advertisements
To prevent your cluster from advertising additional routes to the network, you must disable the route advertisements feature in the network operator configuration. You can disable route advertisements to manage network traffic and maintain security within your environment.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To disable additional routing support, enter the following command:
$ oc patch network.operator cluster --type=merge \
    -p='{
      "spec": {
        "defaultNetwork": {
          "ovnKubernetesConfig": {
            "routeAdvertisements": "Disabled"
          }
        }
      }
    }'
7.4. Example route advertisements setup
To learn how to implement a route reflection setup on bare-metal infrastructure, you can follow this sample configuration. This example demonstrates how to enable the necessary feature gates and configure objects to advertise pod and egress IP routes.
As a cluster administrator, you can configure the following example route advertisements setup for your cluster. This configuration is intended as a sample that demonstrates how to configure route advertisements.
7.4.1. Sample route advertisements setup
You can implement Border Gateway Protocol (BGP) routing by using a sample configuration to set up route advertisements for your cluster. This configuration demonstrates how to configure route reflection on bare-metal infrastructure to share pod and egress IP routes.
As a cluster administrator, you can enable Border Gateway Protocol (BGP) routing support for your cluster. The configuration uses route reflection rather than a full mesh setup.
BGP routing is supported only on bare-metal infrastructure.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- The cluster is installed on bare-metal infrastructure.
- You have a bare-metal system with access to the cluster where you plan to run the FRR daemon container.
Procedure
Confirm that the RouteAdvertisements feature gate is enabled by running the following command:

$ oc get featuregate -oyaml | grep -i routeadvertisement

Example output

- name: RouteAdvertisements

Configure the Cluster Network Operator (CNO) by running the following command:

$ oc patch Network.operator.openshift.io cluster --type=merge \
    -p='{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'

It might take a few minutes for the CNO to restart all nodes.
Get the IP addresses of the nodes by running the following command:
$ oc get node -owide

Example output

NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                KERNEL-VERSION                 CONTAINER-RUNTIME
master-0   Ready    control-plane,master   27h   v1.31.3   192.168.111.20   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9
master-1   Ready    control-plane,master   27h   v1.31.3   192.168.111.21   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9
master-2   Ready    control-plane,master   27h   v1.31.3   192.168.111.22   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9
worker-0   Ready    worker                 27h   v1.31.3   192.168.111.23   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9
worker-1   Ready    worker                 27h   v1.31.3   192.168.111.24   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9
worker-2   Ready    worker                 27h   v1.31.3   192.168.111.25   <none>        Red Hat Enterprise Linux CoreOS 418.94.202501062026-0   5.14.0-427.50.1.el9_4.x86_64   cri-o://1.31.4-2.rhaos4.18.git33d7598.el9

Get the default pod network of each node by running the following command:
$ oc get node <node_name> -o=jsonpath={.metadata.annotations.k8s\\.ovn\\.org/node-subnets}

Example output

{"default":["10.129.0.0/23"],"ns1.udn-network-primary-layer3":["10.150.6.0/24"]}

On the bare-metal hypervisor, get the IP address for the external FRR container to use by running the following command:
$ ip -j -d route get <a cluster node's IP> | jq -r '.[] | .dev' | xargs ip -d -j address show | jq -r '.[] | .addr_info[0].local'

Create a frr.conf file for FRR that includes each node's IP address, as shown in the following example:

Example frr.conf configuration file

router bgp 64512
 no bgp default ipv4-unicast
 no bgp default ipv6-unicast
 no bgp network import-check
 neighbor 192.168.111.20 remote-as 64512
 neighbor 192.168.111.20 route-reflector-client
 neighbor 192.168.111.21 remote-as 64512
 neighbor 192.168.111.21 route-reflector-client
 neighbor 192.168.111.22 remote-as 64512
 neighbor 192.168.111.22 route-reflector-client
 neighbor 192.168.111.40 remote-as 64512
 neighbor 192.168.111.40 route-reflector-client
 neighbor 192.168.111.47 remote-as 64512
 neighbor 192.168.111.47 route-reflector-client
 neighbor 192.168.111.23 remote-as 64512
 neighbor 192.168.111.23 route-reflector-client
 neighbor 192.168.111.24 remote-as 64512
 neighbor 192.168.111.24 route-reflector-client
 neighbor 192.168.111.25 remote-as 64512
 neighbor 192.168.111.25 route-reflector-client
 address-family ipv4 unicast
  network 192.168.1.0/24
  network 192.169.1.1/32
 exit-address-family
 address-family ipv4 unicast
  neighbor 192.168.111.20 activate
  neighbor 192.168.111.20 next-hop-self
  neighbor 192.168.111.21 activate
  neighbor 192.168.111.21 next-hop-self
  neighbor 192.168.111.22 activate
  neighbor 192.168.111.22 next-hop-self
  neighbor 192.168.111.40 activate
  neighbor 192.168.111.40 next-hop-self
  neighbor 192.168.111.47 activate
  neighbor 192.168.111.47 next-hop-self
  neighbor 192.168.111.23 activate
  neighbor 192.168.111.23 next-hop-self
  neighbor 192.168.111.24 activate
  neighbor 192.168.111.24 next-hop-self
  neighbor 192.168.111.25 activate
  neighbor 192.168.111.25 next-hop-self
 exit-address-family
 neighbor remote-as 64512
 neighbor route-reflector-client
 address-family ipv6 unicast
  network 2001:db8::/128
 exit-address-family
 address-family ipv6 unicast
  neighbor activate
  neighbor next-hop-self
 exit-address-family

Create a file named daemons that includes the following content:

Example daemons configuration file

# This file tells the frr package which daemons to start.
#
# Sample configurations for these daemons can be found in
# /usr/share/doc/frr/examples/.
#
# ATTENTION:
#
# When activating a daemon for the first time, a config file, even if it is
# empty, has to be present *and* be owned by the user and group "frr", else
# the daemon will not be started by /etc/init.d/frr. The permissions should
# be u=rw,g=r,o=.
# When using "vtysh" such a config file is also needed. It should be owned by
# group "frrvty" and set to ug=rw,o= though. Check /etc/pam.d/frr, too.
#
# The watchfrr and zebra daemons are always started.
#
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
bfdd=yes
fabricd=no
vrrpd=no

#
# If this option is set the /etc/init.d/frr script automatically loads
# the config via "vtysh -b" when the servers are started.
# Check /etc/pam.d/frr if you intend to use "vtysh"!
#
vtysh_enable=yes
zebra_options=" -A 127.0.0.1 -s 90000000"
bgpd_options=" -A 127.0.0.1"
ospfd_options=" -A 127.0.0.1"
ospf6d_options=" -A ::1"
ripd_options=" -A 127.0.0.1"
ripngd_options=" -A ::1"
isisd_options=" -A 127.0.0.1"
pimd_options=" -A 127.0.0.1"
ldpd_options=" -A 127.0.0.1"
nhrpd_options=" -A 127.0.0.1"
eigrpd_options=" -A 127.0.0.1"
babeld_options=" -A 127.0.0.1"
sharpd_options=" -A 127.0.0.1"
pbrd_options=" -A 127.0.0.1"
staticd_options="-A 127.0.0.1"
bfdd_options=" -A 127.0.0.1"
fabricd_options="-A 127.0.0.1"
vrrpd_options=" -A 127.0.0.1"

# configuration profile
#
#frr_profile="traditional"
#frr_profile="datacenter"

#
# This is the maximum number of FD's that will be available.
# Upon startup this is read by the control files and ulimit
# is called. Uncomment and use a reasonable value for your
# setup if you are expecting a large number of peers in
# say BGP.
#MAX_FDS=1024

# The list of daemons to watch is automatically generated by the init script.
#watchfrr_options=""

# for debugging purposes, you can specify a "wrap" command to start instead
# of starting the daemon directly, e.g. to use valgrind on ospfd:
#   ospfd_wrap="/usr/bin/valgrind"
# or you can use "all_wrap" for all daemons, e.g. to use perf record:
#   all_wrap="/usr/bin/perf record --call-graph -"
# the normal daemon command is added to this at the end.
Save both the frr.conf and daemons files in the same directory, such as /tmp/frr.

Create an external FRR container by running the following command:

$ sudo podman run -d --privileged --network host --rm --ulimit core=-1 --name frr --volume /tmp/frr:/etc/frr quay.io/frrouting/frr:9.1.0

Create the following
FRRConfiguration and RouteAdvertisements configurations:

Create a receive_all.yaml file that includes the following content:

Example receive_all.yaml configuration file

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-all
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 192.168.111.1
        asn: 64512
        toReceive:
          allowed:
            mode: all

Create a ra.yaml file that includes the following content:

Example ra.yaml configuration file

apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  nodeSelector: {}
  frrConfigurationSelector: {}
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  advertisements:
  - "PodNetwork"
  - "EgressIP"
Apply the receive_all.yaml and ra.yaml files by running the following command:

$ for f in receive_all.yaml ra.yaml; do oc apply -f $f; done
Verification
Verify that the configurations were applied:
Verify that the FRRConfiguration configurations were created by running the following command:

$ oc get frrconfiguration -A

Example output

NAMESPACE           NAME                   AGE
openshift-frr-k8s   ovnk-generated-6lmfb   4h47m
openshift-frr-k8s   ovnk-generated-bhmnm   4h47m
openshift-frr-k8s   ovnk-generated-d2rf5   4h47m
openshift-frr-k8s   ovnk-generated-f958l   4h47m
openshift-frr-k8s   ovnk-generated-gmsmw   4h47m
openshift-frr-k8s   ovnk-generated-kmnqg   4h47m
openshift-frr-k8s   ovnk-generated-wpvgb   4h47m
openshift-frr-k8s   ovnk-generated-xq7v6   4h47m
openshift-frr-k8s   receive-all            4h47m

Verify that the
RouteAdvertisements configurations were created by running the following command:

$ oc get ra -A

Example output

NAME      STATUS
default   Accepted
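As an aside, the status column can be checked mechanically. The following sketch scans the sample oc get ra -A output shown above and reports whether every object is Accepted; the output text is inlined here for illustration.

```shell
# Sketch: verify that every RouteAdvertisements object in the sample
# `oc get ra -A` output reports the Accepted status.
output='NAME      STATUS
default   Accepted'
bad=$(printf '%s\n' "$output" | awk 'NR > 1 && $2 != "Accepted" { print $1 }')
if [ -z "$bad" ]; then
  echo "all RouteAdvertisements accepted"
else
  echo "not accepted: $bad"
fi
```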
Get the external FRR container ID by running the following command:
$ sudo podman ps | grep frr

Example output

22cfc713890e  quay.io/frrouting/frr:9.1.0  /usr/lib/frr/dock...  5 hours ago  Up 5 hours ago  frr

Use the container ID that you obtained in the previous step to check the BGP neighbor and routes in the external FRR container's vtysh session. Run the following command:

$ sudo podman exec -it <container_id> vtysh -c "show ip bgp"

Example output

BGP table version is 10, local router ID is 192.168.111.1, vrf id 0
Default local pref 100, local AS 64512
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

    Network          Next Hop            Metric LocPrf Weight Path
 *>i10.128.0.0/23    192.168.111.22           0    100      0 i
 *>i10.128.2.0/23    192.168.111.23           0    100      0 i
 *>i10.129.0.0/23    192.168.111.20           0    100      0 i
 *>i10.129.2.0/23    192.168.111.24           0    100      0 i
 *>i10.130.0.0/23    192.168.111.21           0    100      0 i
 *>i10.130.2.0/23    192.168.111.40           0    100      0 i
 *>i10.131.0.0/23    192.168.111.25           0    100      0 i
 *>i10.131.2.0/23    192.168.111.47           0    100      0 i
 *> 192.168.1.0/24   0.0.0.0                  0         32768 i
 *> 192.169.1.1/32   0.0.0.0                  0         32768 i

Find the
frr-k8s pod for each cluster node by running the following command:

$ oc -n openshift-frr-k8s get pod -owide

Example output

NAME                                      READY   STATUS    RESTARTS   AGE   IP               NODE                NOMINATED NODE   READINESS GATES
frr-k8s-86wmq                             6/6     Running   0          25h   192.168.111.20   master-0            <none>           <none>
frr-k8s-h2wl6                             6/6     Running   0          25h   192.168.111.21   master-1            <none>           <none>
frr-k8s-jlbgs                             6/6     Running   0          25h   192.168.111.40   node1.example.com   <none>           <none>
frr-k8s-qc6l5                             6/6     Running   0          25h   192.168.111.25   worker-2            <none>           <none>
frr-k8s-qtxdc                             6/6     Running   0          25h   192.168.111.47   node2.example.com   <none>           <none>
frr-k8s-s5bxh                             6/6     Running   0          25h   192.168.111.24   worker-1            <none>           <none>
frr-k8s-szgj9                             6/6     Running   0          25h   192.168.111.22   master-2            <none>           <none>
frr-k8s-webhook-server-6cd8b8d769-kmctw   1/1     Running   0          25h   10.131.2.9       node3.example.com   <none>           <none>
frr-k8s-zwmgh                             6/6     Running   0          25h   192.168.111.23   worker-0            <none>           <none>

From the OpenShift Container Platform cluster, check BGP routes on the cluster node's
frr-k8s pod in the FRR container by running the following command:

$ oc -n openshift-frr-k8s rsh -c frr frr-k8s-86wmq

Start the vtysh session from the cluster node by running the following command:

sh-5.1# vtysh

Example output

Hello, this is FRRouting (version 8.5.3).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

Check the BGP routes by running the following command:

worker-2# show ip bgp
Example output

BGP table version is 10, local router ID is 192.168.111.25, vrf id 0
Default local pref 100, local AS 64512
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

    Network          Next Hop            Metric LocPrf Weight Path
 *>i10.128.0.0/23    192.168.111.22           0    100      0 i
 *>i10.128.2.0/23    192.168.111.23           0    100      0 i
 *>i10.129.0.0/23    192.168.111.20           0    100      0 i
 *>i10.129.2.0/23    192.168.111.24           0    100      0 i
 *>i10.130.0.0/23    192.168.111.21           0    100      0 i
 *>i10.130.2.0/23    192.168.111.40           0    100      0 i
 *> 10.131.0.0/23    0.0.0.0                  0         32768 i
 *>i10.131.2.0/23    192.168.111.47           0    100      0 i
 *>i192.168.1.0/24   192.168.111.1            0    100      0 i
 *>i192.169.1.1/32   192.168.111.1            0    100      0 i

Displayed 10 routes and 10 total paths

From the OpenShift Container Platform cluster, debug the node by running the following command:

$ oc debug node/<node_name>

Example output

Temporary namespace openshift-debug-lbtgh is created for debugging node...
Starting pod/worker-2-debug-zrg4v ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.111.25
If you don't see a command prompt, try pressing enter.

Confirm that the BGP routes are being advertised by running the following command:

sh-5.1# ip route show | grep bgp

Example output

10.128.0.0/23 nhid 268 via 192.168.111.22 dev br-ex proto bgp metric 20
10.128.2.0/23 nhid 259 via 192.168.111.23 dev br-ex proto bgp metric 20
10.129.0.0/23 nhid 260 via 192.168.111.20 dev br-ex proto bgp metric 20
10.129.2.0/23 nhid 261 via 192.168.111.24 dev br-ex proto bgp metric 20
10.130.0.0/23 nhid 266 via 192.168.111.21 dev br-ex proto bgp metric 20
10.130.2.0/23 nhid 262 via 192.168.111.40 dev br-ex proto bgp metric 20
10.131.2.0/23 nhid 263 via 192.168.111.47 dev br-ex proto bgp metric 20
192.168.1.0/24 nhid 264 via 192.168.111.1 dev br-ex proto bgp metric 20
192.169.1.1 nhid 264 via 192.168.111.1 dev br-ex proto bgp metric 20
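The ip route output can be summarized with standard text tools. The following sketch inlines two sample lines from the output above and prints each BGP-learned prefix with its next hop.

```shell
# Sketch: summarize BGP-learned routes from `ip route` output. Two sample
# lines from the output above are inlined for illustration.
routes='10.128.0.0/23 nhid 268 via 192.168.111.22 dev br-ex proto bgp metric 20
10.129.0.0/23 nhid 260 via 192.168.111.20 dev br-ex proto bgp metric 20'
# Field 1 is the prefix, field 5 is the next-hop address after "via".
printf '%s\n' "$routes" | awk '/proto bgp/ { print $1, "via", $5 }'
```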