Chapter 31. Load balancing with MetalLB


31.1. About MetalLB and the MetalLB Operator

As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. The external IP address is added to the host network for your cluster.

31.1.1. When to use MetalLB

Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address.

You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster.

After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer, MetalLB provides a platform-native load balancer.

MetalLB operating in layer 2 mode provides support for failover by utilizing a mechanism similar to IP failover. However, instead of relying on the Virtual Router Redundancy Protocol (VRRP) and keepalived, MetalLB leverages a gossip-based protocol to identify instances of node failure. When a failover is detected, another node assumes the role of the leader node, and a gratuitous ARP message is dispatched to broadcast this change.

MetalLB operating in layer 3 or Border Gateway Protocol (BGP) mode delegates failure detection to the network. The BGP router or routers that the OpenShift Container Platform nodes have established a connection with will identify any node failure and terminate the routes to that node.

Using MetalLB instead of IP failover is preferable for ensuring high availability of pods and services.

31.1.2. MetalLB Operator custom resources

The MetalLB Operator monitors its own namespace for the following custom resources:

MetalLB
When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster.
IPAddressPool
MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer. An IPAddressPool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An IPAddressPool requires a name. The documentation uses names like doc-example, doc-example-reserved, and doc-example-ipv6. An IPAddressPool assigns IP addresses from the pool. L2Advertisement and BGPAdvertisement custom resources enable the advertisement of a given IP from a given pool.

Note

A single IPAddressPool can be referenced by an L2 advertisement and a BGP advertisement. For a minimal example of how these resources reference each other, see the sketch after this list.

BGPPeer
The BGP peer custom resource identifies the BGP router for MetalLB to communicate with, the AS number of the router, the AS number for MetalLB, and customizations for route advertisement. MetalLB advertises the routes for service load-balancer IP addresses to one or more BGP peers.
BFDProfile
The BFD profile custom resource configures Bidirectional Forwarding Detection (BFD) for a BGP peer. BFD provides faster path failure detection than BGP alone provides.
L2Advertisement
The L2Advertisement custom resource advertises an IP coming from an IPAddressPool using the L2 protocol.
BGPAdvertisement
The BGPAdvertisement custom resource advertises an IP coming from an IPAddressPool using the BGP protocol.
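
The following minimal sketch illustrates how these custom resources reference each other: an IPAddressPool, plus an L2Advertisement and a BGPAdvertisement that both select the pool by name. The pool name and address range are illustrative values only.

# Illustrative IPAddressPool referenced by both advertisement types
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.20
---
# Advertise addresses from the pool with the L2 protocol
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-doc-example
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example
---
# Advertise the same pool to BGP peers
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgp-doc-example
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example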

After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the controller and speaker MetalLB software components begin running.

MetalLB validates all relevant custom resources.

31.1.3. MetalLB software components

When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources.

When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set.

Note

You can configure deployment specifications in the MetalLB custom resource to manage how controller and speaker pods deploy and run in your cluster. For more information about these deployment specifications, see the Additional resources section.

controller

The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer, Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs:

Example output

"event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller

speaker

The Operator starts a daemon set for speaker pods. By default, a pod is started on each node in your cluster. You can limit the pods to specific nodes by specifying a node selector in the MetalLB custom resource when you start MetalLB. If the controller allocated the IP address to the service and the service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command.
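
For example, you can inspect the logs of both components with commands similar to the following. The deployment and daemon set names and the metallb-system namespace match the defaults described in this chapter; the container names are assumed to be controller and speaker, so adjust the commands if your installation differs:

$ oc logs -n metallb-system deployment/controller -c controller | grep ipAllocated
$ oc logs -n metallb-system daemonset/speaker -c speaker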

For layer 2 mode, after the controller allocates an IP address for the service, the speaker pods use an algorithm to determine which speaker pod on which node will announce the load balancer IP address. The algorithm involves hashing the node name and the load balancer IP address. For more information, see "MetalLB and external traffic policy". The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses.

For Border Gateway Protocol (BGP) mode, after the controller allocates an IP address for the service, each speaker pod advertises the load balancer IP address with its BGP peers. You can configure which nodes start BGP sessions with BGP peers.

Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established.

31.1.4. MetalLB and external traffic policy

With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. With BGP mode, a router on the host network opens a connection to one of the nodes in the cluster for a new client connection. How your cluster handles the traffic after it enters the node is affected by the external traffic policy.

cluster

This is the default value for spec.externalTrafficPolicy.

With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client.

local

With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed.

This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections.
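
For example, the following sketch shows a LoadBalancer service that requests the local policy. The service name, selector, and ports are illustrative values and are not part of the MetalLB configuration; note that the Kubernetes API spells the values Cluster and Local:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # default is Cluster
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP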

Note

The following information is important when configuring the external traffic policy in BGP mode.

Although MetalLB advertises the load balancer IP address from all the eligible nodes, the number of nodes load balancing the service can be limited by the capacity of the router to establish equal-cost multipath (ECMP) routes. If the number of nodes advertising the IP is greater than the ECMP group limit of the router, the router uses fewer nodes than the number of nodes advertising the IP.

For example, if the external traffic policy is set to local, the router has an ECMP group limit set to 16, and the pods implementing a LoadBalancer service are deployed on 30 nodes, the pods deployed on 14 of the nodes do not receive any traffic. In this situation, it would be preferable to set the external traffic policy for the service to cluster.

31.1.5. MetalLB concepts for layer 2 mode

In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface.

Note

In layer 2 mode, MetalLB relies on ARP and NDP. These protocols implement local address resolution within a specific subnet. In this context, the client must be able to reach the VIP assigned by MetalLB that exists on the same subnet as the nodes announcing the service in order for MetalLB to work.

The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6.

In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service.

Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address.

When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node.

Conceptual diagram for MetalLB and layer 2 mode

The preceding graphic shows the following concepts related to MetalLB:

  • An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200.
  • Nodes 1 and 3 have a pod for the application.
  • The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods.
  • Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network.
  • The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200. The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition.
  • Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service.

    • If the external traffic policy for the service is set to cluster, the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running. Only that node can receive traffic for the service.
    • If the external traffic policy for the service is set to local, the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running and at least one endpoint of the service exists. Only that node can receive traffic for the service. In the preceding graphic, either node 1 or 3 would advertise 192.168.100.200.
  • If node 1 becomes unavailable, the external IP address fails over to another node. On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200, and the new node receives the client traffic. In the diagram, the only candidate is node 3.

31.1.6. MetalLB concepts for BGP mode

In BGP mode, by default each speaker pod advertises the load balancer IP address for a service to each BGP peer. It is also possible to advertise the IPs coming from a given pool to a specific set of peers by adding an optional list of BGP peers. BGP peers are commonly network routers that are configured to use the BGP protocol. When a router receives traffic for the load balancer IP address, the router picks one of the nodes with a speaker pod that advertised the IP address. The router sends the traffic to that node. After traffic enters the node, the service proxy for the CNI network plugin distributes the traffic to all the pods for the service.

The directly-connected router on the same layer 2 network segment as the cluster nodes can be configured as a BGP peer. If the directly-connected router is not configured as a BGP peer, you need to configure your network so that packets for load balancer IP addresses are routed between the BGP peers and the cluster nodes that run the speaker pods.

Each time a router receives new traffic for the load balancer IP address, it creates a new connection to a node. Each router manufacturer has an implementation-specific algorithm for choosing which node to initiate the connection with. However, the algorithms commonly are designed to distribute traffic across the available nodes for the purpose of balancing the network load.

If a node becomes unavailable, the router initiates a new connection with another node that has a speaker pod that advertises the load balancer IP address.

Figure 31.1. MetalLB topology diagram for BGP mode

Speaker pods on host network 10.0.1.0/24 use BGP to advertise the load balancer IP address, 203.0.113.200, to a router.

The preceding graphic shows the following concepts related to MetalLB:

  • An application is available through a service that has an IPv4 cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 203.0.113.200.
  • Nodes 2 and 3 have a pod for the application.
  • The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. You can configure MetalLB to specify which nodes run the speaker pods.
  • Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network.
  • Each speaker pod starts a BGP session with all BGP peers and advertises the load balancer IP addresses or aggregated routes to the BGP peers. The speaker pods advertise that they are part of Autonomous System 65010. The diagram shows a router, R1, as a BGP peer within the same Autonomous System. However, you can configure MetalLB to start BGP sessions with peers that belong to other Autonomous Systems.
  • All the nodes with a speaker pod that advertises the load balancer IP address can receive traffic for the service.

    • If the external traffic policy for the service is set to cluster, all the nodes where a speaker pod is running advertise the 203.0.113.200 load balancer IP address and all the nodes with a speaker pod can receive traffic for the service. The host prefix is advertised to the router peer only if the external traffic policy is set to cluster.
    • If the external traffic policy for the service is set to local, then all the nodes where a speaker pod is running and at least one endpoint of the service is running can advertise the 203.0.113.200 load balancer IP address. Only those nodes can receive traffic for the service. In the preceding graphic, nodes 2 and 3 would advertise 203.0.113.200.
  • You can configure MetalLB to control which speaker pods start BGP sessions with specific BGP peers by specifying a node selector when you add a BGP peer custom resource.
  • Any routers, such as R1, that are configured to use BGP can be set as BGP peers.
  • Client traffic is routed to one of the nodes on the host network. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service.
  • If a node becomes unavailable, the router detects the failure and initiates a new connection with another node. You can configure MetalLB to use a Bidirectional Forwarding Detection (BFD) profile for BGP peers. BFD provides faster link failure detection so that routers can initiate new connections earlier than without BFD.

31.1.7. Limitations and restrictions

31.1.7.1. Infrastructure considerations for MetalLB

MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. For example, the following infrastructures can benefit from adding the MetalLB Operator:

  • Bare metal
  • VMware vSphere

MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers.

31.1.7.2. Limitations for layer 2 mode

31.1.7.2.1. Single-node bottleneck

Because MetalLB routes all traffic for a service through a single node, the node can become a bottleneck and limit performance.

Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. This is a fundamental limitation of using ARP and NDP to direct traffic.

31.1.7.2.2. Slow failover performance

Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed.

Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update.

Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems.

To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh.

During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries.

31.1.7.2.3. Additional Network and MetalLB cannot use same network

Using the same VLAN for both MetalLB and an additional network interface set up on a source pod might result in a connection failure. This occurs when both the MetalLB IP and the source pod reside on the same node.

To avoid connection failures, place the MetalLB IP in a different subnet from the one where the source pod resides. This configuration ensures that traffic from the source pod will take the default gateway. Consequently, the traffic can effectively reach its destination by using the OVN overlay network, ensuring that the connection functions as intended.

31.1.7.3. Limitations for BGP mode

31.1.7.3.1. Node failure can break all active connections

MetalLB shares a limitation that is common to BGP-based load balancing. When a BGP session terminates, such as when a node fails or when a speaker pod restarts, the session termination might result in resetting all active connections. End users can experience a Connection reset by peer message.

The consequence of a terminated BGP session is implementation-specific for each router manufacturer. However, you can anticipate that a change in the number of speaker pods affects the number of BGP sessions and that active connections with BGP peers will break.

To avoid or reduce the likelihood of a service interruption, you can specify a node selector when you add a BGP peer. By limiting the number of nodes that start BGP sessions, a fault on a node that does not have a BGP session has no effect on connections to the service.
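
For example, the following sketch is a BGP peer that limits BGP sessions to nodes with a specific label. It assumes the metallb.io/v1beta2 BGPPeer API; the peer address, AS numbers, and node label are placeholder values, and the nodeSelectors field is the point of the example:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer
  namespace: metallb-system
spec:
  peerAddress: 10.0.0.1  # placeholder address of the BGP router
  peerASN: 64501         # placeholder AS number of the router
  myASN: 64500           # placeholder AS number for MetalLB
  nodeSelectors:         # only nodes matching this label start BGP sessions
  - matchLabels:
      metallb-bgp: "true"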

31.1.7.3.2. Support for a single ASN and a single router ID only

When you add a BGP peer custom resource, you specify the spec.myASN field to identify the Autonomous System Number (ASN) that MetalLB belongs to. OpenShift Container Platform uses an implementation of BGP with MetalLB that requires MetalLB to belong to a single ASN. If you attempt to add a BGP peer and specify a different value for spec.myASN than an existing BGP peer custom resource, you receive an error.

Similarly, when you add a BGP peer custom resource, the spec.routerID field is optional. If you specify a value for this field, you must specify the same value for all other BGP peer custom resources that you add.

The limitation to support a single ASN and single router ID is a difference with the community-supported implementation of MetalLB.

31.2. Installing the MetalLB Operator

As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.

MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator.

31.2.1. Installing the MetalLB Operator from the OperatorHub using the web console

As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console.

Prerequisites

  • Log in as a user with cluster-admin privileges.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → OperatorHub.
  2. Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. On the Install Operator page, accept the defaults and click Install.

Verification

  1. To confirm that the installation is successful:

    1. Navigate to the Operators → Installed Operators page.
    2. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded.
  2. If the Operator is not installed successfully, check the status of the Operator and review the logs:

    1. Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
    2. Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-operators project that are reporting issues.

31.2.2. Installing from OperatorHub by using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. You can use the OpenShift CLI (oc) to install the MetalLB Operator.

It is recommended that when using the CLI you install the Operator in the metallb-system namespace.

Prerequisites

  • A cluster installed on bare-metal hardware.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the MetalLB Operator by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb-system
    EOF
  2. Create an Operator group custom resource (CR) in the namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: metallb-operator
      namespace: metallb-system
    EOF
  3. Confirm the Operator group is installed in the namespace:

    $ oc get operatorgroup -n metallb-system

    Example output

    NAME               AGE
    metallb-operator   14m

  4. Create a Subscription CR:

    1. Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator-sub
        namespace: metallb-system
      spec:
        channel: stable
        name: metallb-operator
        source: redhat-operators 1
        sourceNamespace: openshift-marketplace

      1 You must specify the redhat-operators value.
    2. To create the Subscription CR, run the following command:

      $ oc create -f metallb-sub.yaml
  5. Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as in the following command:

    $ oc label ns metallb-system "openshift.io/cluster-monitoring=true"

Verification

The verification steps assume the MetalLB Operator is installed in the metallb-system namespace.

  1. Confirm the install plan is in the namespace:

    $ oc get installplan -n metallb-system

    Example output

    NAME            CSV                                   APPROVAL    APPROVED
    install-wzg94   metallb-operator.4.12.0-nnnnnnnnnnnn   Automatic   true

    Note

    Installation of the Operator might take a few seconds.

  2. To verify that the Operator is installed, enter the following command:

    $ oc get clusterserviceversion -n metallb-system \
      -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                  Phase
    metallb-operator.4.12.0-nnnnnnnnnnnn   Succeeded

31.2.3. Starting MetalLB on your cluster

After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.
  • Install the MetalLB Operator.

Procedure

This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed using the web console, substitute openshift-operators for the namespace.

  1. Create a single instance of a MetalLB custom resource:

    $ cat << EOF | oc apply -f -
    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    EOF

Verification

Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running.

  1. Verify that the deployment for the controller is running:

    $ oc get deployment -n metallb-system controller

    Example output

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           11m

  2. Verify that the daemon set for the speaker is running:

    $ oc get daemonset -n metallb-system speaker

    Example output

    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    speaker   6         6         6       6            6           kubernetes.io/os=linux   18m

    The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.

31.2.4. Deployment specifications for MetalLB

When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks:

  • Select nodes for MetalLB pod deployment.
  • Manage scheduling by using pod priority and pod affinity.
  • Assign CPU limits for MetalLB pods.
  • Assign a container RuntimeClass for MetalLB pods.
  • Assign metadata for MetalLB pods.
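
As a hedged sketch of the last two items in the preceding list, the following MetalLB custom resource sets a container RuntimeClass and adds metadata for the speaker pods through the speakerConfig section. The RuntimeClass name and annotation are placeholder values, and the assumption is that speakerConfig (and controllerConfig) accept runtimeClassName and annotations fields as the task list suggests:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  speakerConfig:
    runtimeClassName: myclass        # placeholder RuntimeClass name
    annotations:                     # placeholder metadata added to speaker pods
      example.com/owner: network-team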

31.2.4.1. Limit speaker pods to specific nodes

By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods.

The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address.

If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.

Example configuration to limit speaker pods to worker nodes

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector: 1
    node-role.kubernetes.io/worker: ""
  speakerTolerations: 2
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"

1 The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector.
2 In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator.

After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker=.

You can optionally control which nodes the speaker pods should, or should not, be scheduled on by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources.

31.2.4.2. Configuring pod priority and pod affinity in a MetalLB deployment

You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node.

Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.
  • You have installed the MetalLB Operator.
  • You have started the MetalLB Operator on your cluster.

Procedure

  1. Create a PriorityClass custom resource, such as myPriorityClass.yaml, to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000. Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000
  2. Apply the PriorityClass custom resource configuration:

    $ oc apply -f myPriorityClass.yaml
  3. Create a MetalLB custom resource, such as MetalLBPodConfig.yaml, to specify the priorityClassName and podAffinity values:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      controllerConfig:
        priorityClassName: high-priority 1
        affinity:
          podAffinity: 2
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                 app: metallb
              topologyKey: kubernetes.io/hostname
      speakerConfig:
        priorityClassName: high-priority
        affinity:
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                 app: metallb
              topologyKey: kubernetes.io/hostname

    1 Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority.
    2 Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods.
  4. Apply the MetalLB custom resource configuration:

    $ oc apply -f MetalLBPodConfig.yaml

Verification

  • To view the priority class that you assigned to pods in the metallb-system namespace, run the following command:

    $ oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName

    Example output

    NAME                                                 PRIORITY
    controller-584f5c8cd8-5zbvg                          high-priority
    metallb-operator-controller-manager-9c8d9985-szkqg   <none>
    metallb-operator-webhook-server-c895594d4-shjgx      <none>
    speaker-dddf7                                        high-priority

  • To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod’s node or nodes by running the following command:

    $ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system

31.2.4.3. Configuring pod CPU limits in a MetalLB deployment

You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.
  • You have installed the MetalLB Operator.

Procedure

  1. Create a MetalLB custom resource file, such as CPULimits.yaml, to specify the cpu value for the controller and speaker pods:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      controllerConfig:
        resources:
          limits:
            cpu: "200m"
      speakerConfig:
        resources:
          limits:
            cpu: "300m"
  2. Apply the MetalLB custom resource configuration:

    $ oc apply -f CPULimits.yaml

Verification

  • To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod:

    $ oc describe pod <pod_name>

31.2.6. Next steps

31.3. Upgrading the MetalLB Operator

By default, the Subscription custom resource (CR) that subscribes the metallb-system namespace to the MetalLB Operator sets the installPlanApproval parameter to Automatic. This means that when Red Hat-provided Operator catalogs include a newer version of the MetalLB Operator, the MetalLB Operator is automatically upgraded.

If you need to manually control upgrading the MetalLB Operator, set the installPlanApproval parameter to Manual.

31.3.1. Manually upgrading the MetalLB Operator

To manually control upgrading the MetalLB Operator, you must edit the Subscription custom resource (CR) that subscribes the metallb-system namespace to the MetalLB Operator. A Subscription CR is created as part of the Operator installation and the CR has the installPlanApproval parameter set to Automatic by default.

Prerequisites

  • You updated your cluster to the latest z-stream release.
  • You used OperatorHub to install the MetalLB Operator.
  • Access the cluster as a user with the cluster-admin role.

Procedure

  1. Get the YAML definition of the metallb-operator subscription in the metallb-system namespace by entering the following command:

    $ oc -n metallb-system get subscription metallb-operator -o yaml
  2. Edit the Subscription CR by setting the installPlanApproval parameter to Manual:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: metallb-operator
      namespace: metallb-system
    # ...
    spec:
       channel: stable
       installPlanApproval: Manual
       name: metallb-operator
       source: redhat-operators
       sourceNamespace: openshift-marketplace
    # ...
  3. Find the latest OpenShift Container Platform 4.12 version of the MetalLB Operator by entering the following command:

    $ oc -n metallb-system get csv

    Example output

    NAME                       DISPLAY            VERSION    REPLACES    PHASE
    metallb-operator.v4.12.0   MetalLB Operator   4.12.0                 Succeeded

  4. Check the install plan that exists in the namespace by entering the following command.

    $ oc -n metallb-system get installplan

    Example output that shows install-tsz2g as a manual install plan

    NAME            CSV                                     APPROVAL    APPROVED
    install-shpmd   metallb-operator.v4.12.0-202502261233   Automatic   true
    install-tsz2g   metallb-operator.v4.12.0-202503102139   Manual      false

  5. Edit the install plan that exists in the namespace by entering the following command. Ensure that you replace <name_of_installplan> with the name of the install plan, such as install-tsz2g.

    $ oc edit installplan <name_of_installplan> -n metallb-system
    1. With the install plan open in your editor, set the spec.approval parameter to Manual and set the spec.approved parameter to true.

      Note

      After you edit the install plan, the upgrade operation starts. If you enter the oc -n metallb-system get csv command during the upgrade operation, the output might show the Replacing or the Pending status.

Verification

  1. Verify the upgrade was successful by entering the following command:

    $ oc -n metallb-system get csv

    Example output

    NAME                                    DISPLAY            VERSION               REPLACES                                PHASE
    metallb-operator.v4.12.0-202503102139   MetalLB Operator   4.12.0-202503102139   metallb-operator.v4.12.0-202502261233   Succeeded

31.3.2. Additional resources

31.4. Configuring MetalLB address pools

As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples in this section assume the namespace is metallb-system.

31.4.1. About the IPAddressPool custom resource

Note

The address pool custom resource definition (CRD) and API documented in "Load balancing with MetalLB" in OpenShift Container Platform 4.10 can still be used in 4.12. However, the enhanced functionality associated with advertising the IPAddressPools with layer 2 or the BGP protocol is not supported when using the address pool CRD.

The fields for the IPAddressPool custom resource are described in the following table.

Table 31.1. MetalLB IPAddressPool pool custom resource
Field: metadata.name
Type: string
Description: Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example, silver, and gold are used throughout the documentation.

Field: metadata.namespace
Type: string
Description: Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses.

Field: metadata.label
Type: string
Description: Optional: Specifies the key value pair assigned to the IPAddressPool. This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement.

Field: spec.addresses
Type: string
Description: Specifies a list of IP addresses for the MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen.

Field: spec.autoAssign
Type: boolean
Description: Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true.

Note

For IP address pool configurations, ensure the addresses field specifies only IPs that are available and not in use by other network devices, especially gateway addresses, to prevent conflicts when autoAssign is enabled.

Field: spec.avoidBuggyIPs
Type: boolean
Description: Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not allocated from the pool. The default value is false. Some older consumer network equipment mistakenly blocks IP addresses ending in .0 and .255.
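
The following sketch combines the two optional fields from the table: automatic assignment is disabled and addresses ending in .0 and .255 are excluded. The pool name and address range are illustrative values:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-options
  namespace: metallb-system
spec:
  addresses:
  - 10.10.10.0/24
  autoAssign: false     # IP addresses must be requested explicitly
  avoidBuggyIPs: true   # do not allocate addresses ending in .0 or .255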

31.4.2. Configuring an address pool

As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create a file, such as ipaddresspool.yaml, with content like the following example:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: doc-example
      labels: 1
        zone: east
    spec:
      addresses:
      - 203.0.113.1-203.0.113.10
      - 203.0.113.65-203.0.113.75
    # ...

    1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement.
  2. Apply the configuration for the IP address pool:

    $ oc apply -f ipaddresspool.yaml

Verification

  1. View the address pool by entering the following command:

    $ oc describe -n metallb-system IPAddressPool doc-example

    Example output

    Name:         doc-example
    Namespace:    metallb-system
    Labels:       zone=east
    Annotations:  <none>
    API Version:  metallb.io/v1beta1
    Kind:         IPAddressPool
    Metadata:
      ...
    Spec:
      Addresses:
        203.0.113.1-203.0.113.10
        203.0.113.65-203.0.113.75
      Auto Assign:  true
    Events:         <none>

  2. Confirm that the address pool name, such as doc-example, and the IP address ranges exist in the output.

31.4.3. Example address pool configurations

The following examples show address pool configurations for specific scenarios.

31.4.3.1. Example: IPv4 and CIDR ranges

You can specify a range of IP addresses in classless inter-domain routing (CIDR) notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-cidr
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.0/24
  - 192.168.200.0/24
  - 192.168.255.1-192.168.255.5
# ...

31.4.3.2. Example: Assign IP addresses

You can set the autoAssign field to false to prevent MetalLB from automatically assigning IP addresses from the address pool. You can then assign a single IP address or multiple IP addresses from an IP address pool. To assign an IP address, append the /32 CIDR notation to the target IP address in the spec.addresses parameter. This setting ensures that only the specific IP address is available for assignment, leaving non-reserved IP addresses for application use.

Example IPAddressPool CR that assigns multiple IP addresses

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-reserved
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.1/32
  - 192.168.200.1/32
  autoAssign: false
# ...

Note

When you add a service, you can request a specific IP address from the address pool or you can specify the pool name in an annotation to request any IP address from the pool.
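
For example, a service can request an address from this reserved pool with the metallb.universe.tf/address-pool annotation, or request a specific address with the standard spec.loadBalancerIP field. The service name, selector, and port in this sketch are placeholder values:

apiVersion: v1
kind: Service
metadata:
  name: example-reserved-service
  annotations:
    metallb.universe.tf/address-pool: doc-example-reserved  # request any IP from this pool
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.100.1  # or request this specific reserved IP
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443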

31.4.3.3. Example: IPv4 and IPv6 addresses

You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just as in the IPv4 examples.

Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-combined
  namespace: metallb-system
spec:
  addresses:
  - 10.0.100.0/28
  - 2002:2:2::1-2002:2:2::100
# ...
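
For example, the following sketch requests both an IPv4 and an IPv6 address from the preceding pool by setting the ipFamilyPolicy and ipFamilies fields on the service. The service name, selector, and port are placeholder values:

apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack
  annotations:
    metallb.universe.tf/address-pool: doc-example-combined
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack  # request both address families when available
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: example
  ports:
  - port: 80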

31.4.4. Next steps

31.5. About advertising for the IP address pools

You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing.

MetalLB supports advertising using L2 and BGP for the same set of IP addresses.

MetalLB provides the flexibility to assign address pools to specific BGP peers, effectively to a subset of nodes on the network. This allows for more complex configurations, for example, facilitating the isolation of nodes or the segmentation of the network.

31.5.1. About the BGPAdvertisement custom resource

The fields for the BGPAdvertisements object are defined in the following table:

Table 31.2. BGPAdvertisements configuration
Field: metadata.name
Type: string
Description: Specifies the name for the BGP advertisement.

Field: metadata.namespace
Type: string
Description: Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses.

Field: spec.aggregationLength
Type: integer
Description: Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24, the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route.

Field: spec.aggregationLengthV6
Type: integer
Description: Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124, the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route.

Field: spec.communities
Type: string
Description: Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values:

  • NO_EXPORT: 65535:65281
  • NO_ADVERTISE: 65535:65282
  • NO_EXPORT_SUBCONFED: 65535:65283

    Note

    You can also use community objects that are created along with the strings.

Field: spec.localPref
Type: integer
Description: Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System.

Field: spec.ipAddressPools
Type: string
Description: Optional: The list of IPAddressPools to advertise with this advertisement, selected by name.

Field: spec.ipAddressPoolSelectors
Type: string
Description: Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools.

Field: spec.nodeSelectors
Type: string
Description: Optional: NodeSelectors limits the nodes to announce as next hops for the load balancer IP. When empty, all the nodes are announced as next hops.

Field: spec.peers
Type: string
Description: Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPPeer resources.

31.5.2. Configuring MetalLB with a BGP advertisement and a basic use case

Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities.

31.5.2.1. Example: Advertise a basic address pool configuration with BGP

Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create an IP address pool.

    1. Create a file, such as ipaddresspool.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-bgp-basic
      spec:
        addresses:
          - 203.0.113.200/30
          - fc00:f853:ccd:e799::/124
    2. Apply the configuration for the IP address pool:

      $ oc apply -f ipaddresspool.yaml
  2. Create a BGP advertisement.

    1. Create a file, such as bgpadvertisement.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgpadvertisement-basic
        namespace: metallb-system
      spec:
        ipAddressPools:
        - doc-example-bgp-basic
    2. Apply the configuration:

      $ oc apply -f bgpadvertisement.yaml

31.5.3. Configuring MetalLB with a BGP advertisement and an advanced use case

Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f.

To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers:

  • 203.0.113.200/32, with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers.
  • 203.0.113.200/30, aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800. BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service.

As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32, for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers.

31.5.3.1. Example: Advertise an advanced address pool configuration with BGP

Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create an IP address pool.

    1. Create a file, such as ipaddresspool.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-bgp-adv
        labels:
          zone: east
      spec:
        addresses:
          - 203.0.113.200/30
          - fc00:f853:ccd:e799::/124
        autoAssign: false
    2. Apply the configuration for the IP address pool:

      $ oc apply -f ipaddresspool.yaml
  2. Create a BGP advertisement.

    1. Create a file, such as bgpadvertisement1.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgpadvertisement-adv-1
        namespace: metallb-system
      spec:
        ipAddressPools:
          - doc-example-bgp-adv
        communities:
          - 65535:65282
        aggregationLength: 32
        localPref: 100
    2. Apply the configuration:

      $ oc apply -f bgpadvertisement1.yaml
    3. Create a file, such as bgpadvertisement2.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgpadvertisement-adv-2
        namespace: metallb-system
      spec:
        ipAddressPools:
          - doc-example-bgp-adv
        communities:
          - 8000:800
        aggregationLength: 30
        aggregationLengthV6: 124
    4. Apply the configuration:

      $ oc apply -f bgpadvertisement2.yaml

31.5.4. Advertising an IP address pool from a subset of nodes

To advertise an IP address from an IP address pool, from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP address from an address pool from a specific subnet, for example a public-facing subnet only.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create an IP address pool by using a custom resource:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: pool1
    spec:
      addresses:
        - 4.4.4.100-4.4.4.200
        - 2001:100:4::200-2001:100:4::400
  2. Control which nodes in the cluster the IP address from pool1 advertises from by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource:

    apiVersion: metallb.io/v1beta1
    kind: BGPAdvertisement
    metadata:
      name: example
    spec:
      ipAddressPools:
      - pool1
      nodeSelector:
      - matchLabels:
          kubernetes.io/hostname: NodeA
      - matchLabels:
          kubernetes.io/hostname: NodeB

In this example, the IP address from pool1 advertises from NodeA and NodeB only.

31.5.5. About the L2Advertisement custom resource

The fields for the l2Advertisements object are defined in the following table:

Table 31.3. L2 advertisements configuration
Field: metadata.name
Type: string
Description: Specifies the name for the L2 advertisement.

Field: metadata.namespace
Type: string
Description: Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses.

Field: spec.ipAddressPools
Type: string
Description: Optional: The list of IPAddressPools to advertise with this advertisement, selected by name.

Field: spec.ipAddressPoolSelectors
Type: string
Description: Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools.

Field: spec.nodeSelectors
Type: string
Description: Optional: NodeSelectors limits the nodes to announce as next hops for the load balancer IP. When empty, all the nodes are announced as next hops.

Important

Limiting the nodes to announce as next hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Field: spec.interfaces
Type: string
Description: Optional: The list of interfaces that are used to announce the load balancer IP.

31.5.6. Configuring MetalLB with an L2 advertisement

Configure MetalLB as follows so that the

IPAddressPool
is advertised with the L2 protocol.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create an IP address pool.

    1. Create a file, such as

      ipaddresspool.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-l2
      spec:
        addresses:
          - 4.4.4.0/24
        autoAssign: false
    2. Apply the configuration for the IP address pool:

      $ oc apply -f ipaddresspool.yaml
  2. Create an L2 advertisement.

    1. Create a file, such as

      l2advertisement.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement
        namespace: metallb-system
      spec:
        ipAddressPools:
         - doc-example-l2
    2. Apply the configuration:

      $ oc apply -f l2advertisement.yaml
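
You can optionally confirm that both resources were created in the MetalLB namespace. The following commands are a minimal sketch that reuses the names from the preceding examples:

$ oc get ipaddresspool doc-example-l2 -n metallb-system

$ oc get l2advertisement l2advertisement -n metallb-system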

31.5.7. Configuring MetalLB with an L2 advertisement and label

The

ipAddressPoolSelectors
field in the
BGPAdvertisement
and
L2Advertisement
custom resource definitions is used to associate the
IPAddressPool
to the advertisement based on the label assigned to the
IPAddressPool
instead of the name itself.

This example shows how to configure MetalLB so that the

IPAddressPool
is advertised with the L2 protocol by configuring the
ipAddressPoolSelectors
field.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create an IP address pool.

    1. Create a file, such as

      ipaddresspool.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-l2-label
        labels:
          zone: east
      spec:
        addresses:
          - 172.31.249.87/32
    2. Apply the configuration for the IP address pool:

      $ oc apply -f ipaddresspool.yaml
  2. Create an L2 advertisement that advertises the IP by using

    ipAddressPoolSelectors
    .

    1. Create a file, such as

      l2advertisement.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement-label
        namespace: metallb-system
      spec:
        ipAddressPoolSelectors:
          - matchExpressions:
              - key: zone
                operator: In
                values:
                  - east
    2. Apply the configuration:

      $ oc apply -f l2advertisement.yaml
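
Because the advertisement selects pools by label rather than by name, you can check which IPAddressPool resources carry the zone: east label and are therefore covered by this advertisement. The following command is a minimal sketch:

$ oc get ipaddresspool -n metallb-system -l zone=east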

31.5.8. Configuring MetalLB with an L2 advertisement for selected interfaces

By default, the IP address pool IP addresses that are assigned to the service are advertised from all the network interfaces. You can use the

interfaces
field in the
L2Advertisement
custom resource definition to restrict the network interfaces that advertise the addresses from a given IP address pool.

This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the

interfaces
field of all nodes.

Prerequisites

  • You have installed the OpenShift CLI (
    oc
    ).
  • You are logged in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create an IP address pool:

    1. Create a file, such as

      ipaddresspool.yaml
      , and enter the configuration details like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-l2
      spec:
        addresses:
          - 4.4.4.0/24
        autoAssign: false
    2. Apply the configuration for the IP address pool like the following example:

      $ oc apply -f ipaddresspool.yaml
  2. Create an L2 advertisement that advertises the IP with the

    interfaces
    selector.

    1. Create a YAML file, such as

      l2advertisement.yaml
      , and enter the configuration details like the following example:

      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement
        namespace: metallb-system
      spec:
        ipAddressPools:
          - doc-example-l2
        interfaces:
          - interfaceA
          - interfaceB
    2. Apply the configuration for the advertisement like the following example:

      $ oc apply -f l2advertisement.yaml
Important

The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface.
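
If you are unsure which interface names exist on a node, you can list them from a debug pod before you populate the interfaces field. The following command is a minimal sketch; replace <node_name> with a node in your cluster:

$ oc debug node/<node_name> -- chroot /host ip link show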

31.6. Configuring MetalLB BGP peers

As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers that MetalLB

speaker
pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services.

31.6.1. About the BGP peer custom resource

The fields for the BGP peer custom resource are described in the following table.

Table 31.4. MetalLB BGP peer custom resource
Field | Type | Description

metadata.name

string

Specifies the name for the BGP peer custom resource.

metadata.namespace

string

Specifies the namespace for the BGP peer custom resource.

spec.myASN

integer

Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is

0
to
4294967295
.

spec.peerASN

integer

Specifies the Autonomous System number for the remote end of the BGP session. The range is

0
to
4294967295
.

spec.peerAddress

string

Specifies the IP address of the peer to contact for establishing the BGP session.

spec.sourceAddress

string

Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address.

spec.peerPort

integer

Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is

0
to
16384
.

spec.holdTime

string

Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds (

3s
). The common units are seconds and minutes, such as
3s
,
1m
, and
5m30s
. To detect path failures more quickly, also configure BFD.

spec.keepaliveTime

string

Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the

holdTime
field. The specified value must be less than the value for the
holdTime
field.

spec.routerID

string

Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add.

spec.password

string

Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions.

spec.passwordSecret

string

Optional: Specifies the name of the authentication secret for the BGP peer. The secret must live in the

metallb
namespace and be of type basic-auth.

spec.bfdProfile

string

Optional: Specifies the name of a BFD profile.

spec.nodeSelectors

object[]

Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer.

spec.ebgpMultiHop

boolean

Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to

true
. This field applies to external BGP. External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System.

Note

The

passwordSecret
field is mutually exclusive with the
password
field, and contains a reference to a secret that contains the password to use. Setting both fields causes parsing of the resource to fail.

31.6.2. Configuring a BGP peer

As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.
  • Configure MetalLB with a BGP advertisement.

Procedure

  1. Create a file, such as

    bgppeer.yaml
    , with content like the following example:

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      namespace: metallb-system
      name: doc-example-peer
    spec:
      peerAddress: 10.0.0.1
      peerASN: 64501
      myASN: 64500
      routerID: 10.10.10.10
  2. Apply the configuration for the BGP peer:

    $ oc apply -f bgppeer.yaml
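
After you apply the configuration, you can confirm from a speaker pod that the BGP session with the peer is established. The following commands are a minimal sketch that reuses the troubleshooting commands described later in this chapter; replace <speaker_pod_name> with a pod name from your cluster:

$ oc get -n metallb-system pods -l component=speaker

$ oc exec -n metallb-system <speaker_pod_name> -c frr -- vtysh -c "show bgp summary"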

31.6.3. Configure a specific set of BGP peers for a given address pool

This procedure illustrates how to:

  • Configure a set of address pools (
    pool1
    and
    pool2
    ).
  • Configure a set of BGP peers (
    peer1
    and
    peer2
    ).
  • Configure BGP advertisement to assign
    pool1
    to
    peer1
    and
    pool2
    to
    peer2
    .

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create address pool

    pool1
    .

    1. Create a file, such as

      ipaddresspool1.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: pool1
      spec:
        addresses:
          - 4.4.4.100-4.4.4.200
          - 2001:100:4::200-2001:100:4::400
    2. Apply the configuration for the IP address pool

      pool1
      :

      $ oc apply -f ipaddresspool1.yaml
  2. Create address pool

    pool2
    .

    1. Create a file, such as

      ipaddresspool2.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: pool2
      spec:
        addresses:
          - 5.5.5.100-5.5.5.200
          - 2001:100:5::200-2001:100:5::400
    2. Apply the configuration for the IP address pool

      pool2
      :

      $ oc apply -f ipaddresspool2.yaml
  3. Create BGP

    peer1
    .

    1. Create a file, such as

      bgppeer1.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta2
      kind: BGPPeer
      metadata:
        namespace: metallb-system
        name: peer1
      spec:
        peerAddress: 10.0.0.1
        peerASN: 64501
        myASN: 64500
        routerID: 10.10.10.10
    2. Apply the configuration for the BGP peer:

      $ oc apply -f bgppeer1.yaml
  4. Create BGP

    peer2
    .

    1. Create a file, such as

      bgppeer2.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta2
      kind: BGPPeer
      metadata:
        namespace: metallb-system
        name: peer2
      spec:
        peerAddress: 10.0.0.2
        peerASN: 64501
        myASN: 64500
        routerID: 10.10.10.10
    2. Apply the configuration for the BGP peer:

      $ oc apply -f bgppeer2.yaml
  5. Create BGP advertisement 1.

    1. Create a file, such as

      bgpadvertisement1.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgpadvertisement-1
        namespace: metallb-system
      spec:
        ipAddressPools:
          - pool1
        peers:
          - peer1
        communities:
          - 65535:65282
        aggregationLength: 32
        aggregationLengthV6: 128
        localPref: 100
    2. Apply the configuration:

      $ oc apply -f bgpadvertisement1.yaml
  6. Create BGP advertisement 2.

    1. Create a file, such as

      bgpadvertisement2.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgpadvertisement-2
        namespace: metallb-system
      spec:
        ipAddressPools:
          - pool2
        peers:
          - peer2
        communities:
          - 65535:65282
        aggregationLength: 32
        aggregationLengthV6: 128
        localPref: 100
    2. Apply the configuration:

      $ oc apply -f bgpadvertisement2.yaml

31.6.4. Example BGP peer configurations

31.6.4.1. Example: Limit which nodes connect to a BGP peer

You can specify the node selectors field to control which nodes can connect to a BGP peer.

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-nodesel
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1
  peerASN: 64501
  myASN: 64500
  nodeSelectors:
  - matchExpressions:
    - key: kubernetes.io/hostname
      operator: In
      values: [compute-1.example.com, compute-2.example.com]

31.6.4.2. Example: Specify a BFD profile for a BGP peer

You can specify a BFD profile to associate with BGP peers. BFD complements BGP by providing more rapid detection of communication failures between peers than BGP alone.

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer-bfd
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1
  peerASN: 64501
  myASN: 64500
  holdTime: "10s"
  bfdProfile: doc-example-bfd-profile-full
Note

Deleting the bidirectional forwarding detection (BFD) profile and removing the

bfdProfile
added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824.

31.6.4.3. Example: Specify BGP peers for dual-stack networking

To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6.

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-dual-stack-ipv4
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1
  peerASN: 64500
  myASN: 64500
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-dual-stack-ipv6
  namespace: metallb-system
spec:
  peerAddress: 2620:52:0:88::104
  peerASN: 64500
  myASN: 64500

31.6.5. Next steps

31.7. Configuring community alias

As a cluster administrator, you can configure a community alias and use it across different advertisements.

31.7.1. About the community custom resource

The

community
custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising
ipAddressPools
using the
BGPAdvertisement
. The fields for the
community
custom resource are described in the following table.

Note

The

community
CRD applies only to BGPAdvertisement.

Table 31.5. MetalLB community custom resource
Field | Type | Description

metadata.name

string

Specifies the name for the

community
.

metadata.namespace

string

Specifies the namespace for the

community
. Specify the same namespace that the MetalLB Operator uses.

spec.communities

string

Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its

spec.communities
field.

Table 31.6. CommunityAlias
Field | Type | Description

name

string

The name of the alias for the

community
.

value

string

The BGP

community
value corresponding to the given name.

31.7.2. Configuring MetalLB with a BGP advertisement and community alias

Configure MetalLB as follows so that the

IPAddressPool
is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community.

In the following example, the peer BGP router

doc-example-bgp-peer
receives one
203.0.113.200/32
route and one
fc00:f853:ccd:e799::1/128
route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the
NO_ADVERTISE
community.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create an IP address pool.

    1. Create a file, such as

      ipaddresspool.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb-system
        name: doc-example-bgp-community
      spec:
        addresses:
          - 203.0.113.200/30
          - fc00:f853:ccd:e799::/124
    2. Apply the configuration for the IP address pool:

      $ oc apply -f ipaddresspool.yaml
  2. Create a community alias named

    community1
    .

    apiVersion: metallb.io/v1beta1
    kind: Community
    metadata:
      name: community1
      namespace: metallb-system
    spec:
      communities:
        - name: NO_ADVERTISE
          value: '65535:65282'
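
    Save the community alias in a file, such as community1.yaml, and apply the configuration:

    $ oc apply -f community1.yaml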
  3. Create a BGP peer named

    doc-example-bgp-peer
    .

    1. Create a file, such as

      bgppeer.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta2
      kind: BGPPeer
      metadata:
        namespace: metallb-system
        name: doc-example-bgp-peer
      spec:
        peerAddress: 10.0.0.1
        peerASN: 64501
        myASN: 64500
        routerID: 10.10.10.10
    2. Apply the configuration for the BGP peer:

      $ oc apply -f bgppeer.yaml
  4. Create a BGP advertisement with the community alias.

    1. Create a file, such as

      bgpadvertisement.yaml
      , with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: bgp-community-sample
        namespace: metallb-system
      spec:
        aggregationLength: 32
        aggregationLengthV6: 128
        communities:
          - NO_ADVERTISE 
      1
      
        ipAddressPools:
          - doc-example-bgp-community
        peers:
          - doc-example-bgp-peer
      1
      Specify the CommunityAlias.name here and not the community custom resource (CR) name.
    2. Apply the configuration:

      $ oc apply -f bgpadvertisement.yaml

31.8. Configuring MetalLB BFD profiles

As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides.

31.8.1. About the BFD profile custom resource

The fields for the BFD profile custom resource are described in the following table.

Table 31.7. BFD profile custom resource
Field | Type | Description

metadata.name

string

Specifies the name for the BFD profile custom resource.

metadata.namespace

string

Specifies the namespace for the BFD profile custom resource.

spec.detectMultiplier

integer

Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer.

For example, when the local system has the detect multiplier set to

3
and the remote system has the transmission interval set to
300
, the local system detects failures only after
900
ms without receiving packets.

The range is

2
to
255
. The default value is
3
.

spec.echoMode

boolean

Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is

false
and echo transmission mode is disabled.

When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to

2000
ms.

spec.echoInterval

integer

Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is

10
to
60000
. The default value is
50
ms.

spec.minimumTtl

integer

Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only.

The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions.

The default value is

254
and indicates that the system expects only one hop between this system and the peer.

spec.passiveMode

boolean

Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply.

Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send.

The default value is

false
and marks the session as active.

spec.receiveInterval

integer

Specifies the minimum interval that this system is capable of receiving control packets. The range is

10
to
60000
. The default value is
300
ms.

spec.transmitInterval

integer

Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is

10
to
60000
. The default value is
300
ms.

31.8.2. Configuring a BFD profile

As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Log in as a user with
    cluster-admin
    privileges.

Procedure

  1. Create a file, such as

    bfdprofile.yaml
    , with content like the following example:

    apiVersion: metallb.io/v1beta1
    kind: BFDProfile
    metadata:
      name: doc-example-bfd-profile-full
      namespace: metallb-system
    spec:
      receiveInterval: 300
      transmitInterval: 300
      detectMultiplier: 3
      echoMode: false
      passiveMode: true
      minimumTtl: 254
  2. Apply the configuration for the BFD profile:

    $ oc apply -f bfdprofile.yaml
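
To put the profile into effect, reference it by name from a BGP peer, as shown in the BGP peer examples earlier in this chapter. The following is a minimal sketch that reuses the profile name from this procedure:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer-bfd
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1
  peerASN: 64501
  myASN: 64500
  bfdProfile: doc-example-bfd-profile-full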

31.8.3. Next steps

31.9. Configuring services to use MetalLB

As a cluster administrator, when you add a service of type

LoadBalancer
, you can control how MetalLB assigns an IP address.

31.9.1. Request a specific IP address

Like some other load-balancer implementations, MetalLB accepts the

spec.loadBalancerIP
field in the service specification.

If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning.

Example service YAML for a specific IP address

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    metallb.universe.tf/address-pool: <address_pool_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerIP: <ip_address>

If MetalLB cannot assign the requested IP address, the

EXTERNAL-IP
for the service reports
<pending>
and running
oc describe service <service_name>
includes an event like the following example.

Example event when MetalLB cannot assign a requested IP address

  ...
Events:
  Type     Reason            Age    From                Message
  ----     ------            ----   ----                -------
  Warning  AllocationFailed  3m16s  metallb-controller  Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config

31.9.2. Request an IP address from a specific pool

If you want to assign an IP address from a specific pool, but you are not concerned with the specific IP address, you can use the

metallb.universe.tf/address-pool
annotation to request an IP address from the specified address pool.

Example service YAML for an IP address from a specific pool

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    metallb.universe.tf/address-pool: <address_pool_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer

If the address pool that you specify for

<address_pool_name>
does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment.

31.9.3. Accept any IP address

By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools.

To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required.

Example service YAML for accepting any IP address

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer

31.9.4. Share a specific IP address

By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the

metallb.universe.tf/allow-shared-ip
annotation to the services.

apiVersion: v1
kind: Service
metadata:
  name: service-http
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "web-server-svc"  
1

spec:
  ports:
    - name: http
      port: 80  
2

      protocol: TCP
      targetPort: 8080
  selector:
    <label_key>: <label_value>  
3

  type: LoadBalancer
  loadBalancerIP: 172.31.249.7  
4

---
apiVersion: v1
kind: Service
metadata:
  name: service-https
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "web-server-svc"  
5

spec:
  ports:
    - name: https
      port: 443  
6

      protocol: TCP
      targetPort: 8080
  selector:
    <label_key>: <label_value>  
7

  type: LoadBalancer
  loadBalancerIP: 172.31.249.7  
8
1 5
Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key.
2 6
Specify different port numbers for the services.
3 7
If you specify externalTrafficPolicy: local, specify identical pod selectors so that the services send traffic to the same set of pods. If you use the Cluster external traffic policy, the pod selectors do not need to be identical.
4 8
Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share.

By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services:

  • For one service, specify TCP and for the second service, specify UDP.
  • In both services, specify the same pod selector.
  • Specify the same sharing key and
    spec.loadBalancerIP
    value to colocate the TCP and UDP services on the same IP address, as shown in the sketch after this list.
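
The following is a minimal sketch of this pattern for a hypothetical DNS workload. The service names, the app: dns selector, and the shared IP address placeholder are illustrative only:

apiVersion: v1
kind: Service
metadata:
  name: dns-service-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-service"
spec:
  selector:
    app: dns
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  type: LoadBalancer
  loadBalancerIP: <ip_address>
---
apiVersion: v1
kind: Service
metadata:
  name: dns-service-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-service"
spec:
  selector:
    app: dns
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
      targetPort: 53
  type: LoadBalancer
  loadBalancerIP: <ip_address>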

31.9.5. Configuring a service with MetalLB

You can configure a load-balancing service to use an external IP address from an address pool.

Prerequisites

  • Install the OpenShift CLI (
    oc
    ).
  • Install the MetalLB Operator and start MetalLB.
  • Configure at least one address pool.
  • Configure your network to route traffic from the clients to the host network for the cluster.

Procedure

  1. Create a

    <service_name>.yaml
    file. In the file, ensure that the
    spec.type
    field is set to
    LoadBalancer
    .

    Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service.

  2. Create the service:

    $ oc apply -f <service_name>.yaml

    Example output

    service/<service_name> created

Verification

  • Describe the service:

    $ oc describe service <service_name>

    Example output

    Name:                     <service_name>
    Namespace:                default
    Labels:                   <none>
    Annotations:              metallb.universe.tf/address-pool: doc-example  1
    Selector:                 app=service_name
    Type:                     LoadBalancer  2
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.105.237.254
    IPs:                      10.105.237.254
    LoadBalancer Ingress:     192.168.100.5  3
    Port:                     <unset>  80/TCP
    TargetPort:               8080/TCP
    NodePort:                 <unset>  30550/TCP
    Endpoints:                10.244.0.50:8080
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:  4
      Type    Reason        Age                From             Message
      ----    ------        ----               ----             -------
      Normal  nodeAssigned  32m (x2 over 32m)  metallb-speaker  announcing from node "<node_name>"

    1
    The annotation is present if you request an IP address from a specific pool.
    2
    The service type must indicate LoadBalancer.
    3
    The load-balancer ingress field indicates the external IP address if the service is assigned correctly.
    4
    The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error.

31.10. MetalLB logging, troubleshooting, and support

If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands.

31.10.1. Setting the MetalLB logging levels

MetalLB uses FRRouting (FRR) in a container, and the default logging level of

info
generates a lot of logging. You can control the verbosity of the generated logs by setting the
logLevel
field, as illustrated in this example.

Gain a deeper insight into MetalLB by setting the

logLevel
to
debug
as follows:

Prerequisites

  • You have access to the cluster as a user with the
    cluster-admin
    role.
  • You have installed the OpenShift CLI (
    oc
    ).

Procedure

  1. Create a file, such as

    setdebugloglevel.yaml
    , with content like the following example:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      logLevel: debug
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  2. Apply the configuration:

    $ oc replace -f setdebugloglevel.yaml
    Note

    Use

    oc replace
    because the
    metallb
    CR is already created and this procedure changes only the log level.

  3. Display the names of the

    speaker
    pods:

    $ oc get -n metallb-system pods -l component=speaker

    Example output

    NAME                    READY   STATUS    RESTARTS   AGE
    speaker-2m9pm           4/4     Running   0          9m19s
    speaker-7m4qw           3/4     Running   0          19s
    speaker-szlmx           4/4     Running   0          9m19s

    Note

    Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB.

  4. View the

    speaker
    logs:

    $ oc logs -n metallb-system speaker-7m4qw -c speaker

    Example output

    {"branch":"main","caller":"main.go:92","commit":"3d052535","goversion":"gc / go1.17.1 / amd64","level":"info","msg":"MetalLB speaker starting (commit 3d052535, branch main)","ts":"2022-05-17T09:55:05Z","version":""}
    {"caller":"announcer.go:110","event":"createARPResponder","interface":"ens4","level":"info","msg":"created ARP responder for interface","ts":"2022-05-17T09:55:05Z"}
    {"caller":"announcer.go:119","event":"createNDPResponder","interface":"ens4","level":"info","msg":"created NDP responder for interface","ts":"2022-05-17T09:55:05Z"}
    {"caller":"announcer.go:110","event":"createARPResponder","interface":"tun0","level":"info","msg":"created ARP responder for interface","ts":"2022-05-17T09:55:05Z"}
    {"caller":"announcer.go:119","event":"createNDPResponder","interface":"tun0","level":"info","msg":"created NDP responder for interface","ts":"2022-05-17T09:55:05Z"}
    I0517 09:55:06.515686      95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s
    {"Starting Manager":"(MISSING)","caller":"k8s.go:389","level":"info","ts":"2022-05-17T09:55:08Z"}
    {"caller":"speakerlist.go:310","level":"info","msg":"node event - forcing sync","node addr":"10.0.128.4","node event":"NodeJoin","node name":"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj","ts":"2022-05-17T09:55:08Z"}
    {"caller":"service_controller.go:113","controller":"ServiceReconciler","enqueueing":"openshift-kube-controller-manager-operator/metrics","epslice":"{\"metadata\":{\"name\":\"metrics-xtsxr\",\"generateName\":\"metrics-\",\"namespace\":\"openshift-kube-controller-manager-operator\",\"uid\":\"ac6766d7-8504-492c-9d1e-4ae8897990ad\",\"resourceVersion\":\"9041\",\"generation\":4,\"creationTimestamp\":\"2022-05-17T07:16:53Z\",\"labels\":{\"app\":\"kube-controller-manager-operator\",\"endpointslice.kubernetes.io/managed-by\":\"endpointslice-controller.k8s.io\",\"kubernetes.io/service-name\":\"metrics\"},\"annotations\":{\"endpoints.kubernetes.io/last-change-trigger-time\":\"2022-05-17T07:21:34Z\"},\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"name\":\"metrics\",\"uid\":\"0518eed3-6152-42be-b566-0bd00a60faf8\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"discovery.k8s.io/v1\",\"time\":\"2022-05-17T07:20:02Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:endpoints.kubernetes.io/last-change-trigger-time\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:kubernetes.io/service-name\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\"}\":{}}},\"f:ports\":{}}}]},\"addressType\":\"IPv4\",\"endpoints\":[{\"addresses\":[\"10.129.0.7\"],\"conditions\":{\"ready\":true,\"serving\":true,\"terminating\":false},\"targetRef\":{\"kind\":\"Pod\",\"namespace\":\"openshift-kube-controller-manager-operator\",\"name\":\"kube-controller-manager-operator-6b98b89ddd-8d4nf\",\"uid\":\"dd5139b8-e41c-4946-a31b-1a629314e844\",\"resourceVersion\":\"9038\"},\"nodeName\":\"ci-ln-qb8t3mb-72292-7s7rh-master-0\",\"zone\":\"us-central1-a\"}],\"ports\":[{\"name\":\"https\",\"protocol\":\"TCP\",\"port\":8443}]}","level":"debug","ts":"2022-05-17T09:55:08Z"}

  5. View the FRR logs:

    $ oc logs -n metallb-system speaker-7m4qw -c frr

    Example output

    Started watchfrr
    2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0
    2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0
    2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0
    2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0
    2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500
    2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0
    2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra
    2022/05/17 09:57:25.090 BGP: Registering VRF 0
    2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32
    2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64
    2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7
    2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64
    2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789
    2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire).
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1
    2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0
    2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default)
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT
    2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1
    2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0
    2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active
    2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0
    2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop
    2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force)
    2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved
    2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop
    2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6
    2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths
    2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT
    2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered
    2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out
    2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in
    2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0
    2022/05/17 09:57:31.104 ZEBRA: 	Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring
    2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0
    2022/05/17 09:57:31.105 ZEBRA: 	Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring

31.10.1.1. FRRouting (FRR) log levels

The following table describes the FRR logging levels.

Table 31.8. Log levels
Log level | Description

all

Supplies all logging information for all logging levels.

debug

Information that is diagnostically helpful to people. Set to

debug
to give detailed troubleshooting information.

info

Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level.

warn

Anything that can potentially cause inconsistent

MetalLB
behaviour. Usually
MetalLB
automatically recovers from this type of error.

error

Any error that is fatal to the functioning of

MetalLB
. These errors usually require administrator intervention to fix.

none

Turn off all logging.

31.10.2. Troubleshooting BGP issues

The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the

speaker
pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container.

Prerequisites

  • You have access to the cluster as a user with the
    cluster-admin
    role.
  • You have installed the OpenShift CLI (
    oc
    ).

Procedure

  1. Display the names of the

    speaker
    pods:

    $ oc get -n metallb-system pods -l component=speaker

    Example output

    NAME            READY   STATUS    RESTARTS   AGE
    speaker-66bth   4/4     Running   0          56m
    speaker-gvfnf   4/4     Running   0          56m
    ...

  2. Display the running configuration for FRR:

    $ oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config"

    Example output

    Building configuration...
    
    Current configuration:
    !
    frr version 7.5.1_git
    frr defaults traditional
    hostname some-hostname
    log file /etc/frr/frr.log informational
    log timestamp precision 3
    service integrated-vtysh-config
    !
    router bgp 64500  
    1
    
     bgp router-id 10.0.1.2
     no bgp ebgp-requires-policy
     no bgp default ipv4-unicast
     no bgp network import-check
     neighbor 10.0.2.3 remote-as 64500  
    2
    
     neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full  
    3
    
     neighbor 10.0.2.3 timers 5 15
     neighbor 10.0.2.4 remote-as 64500  
    4
    
     neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full  
    5
    
     neighbor 10.0.2.4 timers 5 15
     !
     address-family ipv4 unicast
      network 203.0.113.200/30   
    6
    
      neighbor 10.0.2.3 activate
      neighbor 10.0.2.3 route-map 10.0.2.3-in in
      neighbor 10.0.2.4 activate
      neighbor 10.0.2.4 route-map 10.0.2.4-in in
     exit-address-family
     !
     address-family ipv6 unicast
      network fc00:f853:ccd:e799::/124  
    7
    
      neighbor 10.0.2.3 activate
      neighbor 10.0.2.3 route-map 10.0.2.3-in in
      neighbor 10.0.2.4 activate
      neighbor 10.0.2.4 route-map 10.0.2.4-in in
     exit-address-family
    !
    route-map 10.0.2.3-in deny 20
    !
    route-map 10.0.2.4-in deny 20
    !
    ip nht resolve-via-default
    !
    ipv6 nht resolve-via-default
    !
    line vty
    !
    bfd
     profile doc-example-bfd-profile-full  
    8
    
      transmit-interval 35
      receive-interval 35
      passive-mode
      echo-mode
      echo-interval 35
      minimum-ttl 10
     !
    !
    end

    1
    The router bgp section indicates the ASN for MetalLB.
    2 4
    Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added.
    3 5 8
    If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output.
    6 7
    Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in the address pool custom resources that you added.

  3. Display the BGP summary:

    $ oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary"

    Example output

    IPv4 Unicast Summary:
    BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0
    BGP table version 1
    RIB entries 1, using 192 bytes of memory
    Peers 2, using 29 KiB of memory
    
    Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt
    10.0.2.3        4      64500       387       389        0    0    0 00:32:02            0        1  
    1
    
    10.0.2.4        4      64500         0         0        0    0    0    never       Active        0  
    2
    
    
    Total number of neighbors 2
    
    IPv6 Unicast Summary:
    BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0
    BGP table version 1
    RIB entries 1, using 192 bytes of memory
    Peers 2, using 29 KiB of memory
    
    Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt
    10.0.2.3        4      64500       387       389        0    0    0 00:32:02 NoNeg  
    3
    
    10.0.2.4        4      64500         0         0        0    0    0    never       Active        0  
    4
    
    
    Total number of neighbors 2

    1 3
    Confirm that the output includes a line for each BGP peer custom resource that you added.
    2 4
    Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer.
  4. Display the BGP peers that received an address pool:

    $ oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30"

    Replace

    ipv4
    with
    ipv6
    to display the BGP peers that received an IPv6 address pool. Replace
    203.0.113.200/30
    with an IPv4 or IPv6 IP address range from an address pool.

    Example output

    BGP routing table entry for 203.0.113.200/30
    Paths: (1 available, best #1, table default)
      Advertised to non peer-group peers:
      10.0.2.3  1
      Local
        0.0.0.0 from 0.0.0.0 (10.0.1.2)
          Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received)
          Last update: Mon Jan 10 19:49:07 2022

    1
    Confirm that the output includes an IP address for a BGP peer.

31.10.3. Troubleshooting BFD issues

The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the

speaker
pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container.

Prerequisites

  • You have access to the cluster as a user with the
    cluster-admin
    role.
  • You have installed the OpenShift CLI (
    oc
    ).

Procedure

  1. Display the names of the

    speaker
    pods:

    $ oc get -n metallb-system pods -l component=speaker

    Example output

    NAME            READY   STATUS    RESTARTS   AGE
    speaker-66bth   4/4     Running   0          26m
    speaker-gvfnf   4/4     Running   0          26m
    ...

  2. Display the BFD peers:

    $ oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief"

    Example output

    Session count: 2
    SessionId  LocalAddress              PeerAddress              Status
    =========  ============              ===========              ======
    3909139637 10.0.1.2                  10.0.2.3                 up  1

    1
    Confirm that the

    PeerAddress
    column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates
    down
    , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like
    oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}'
    .

31.10.4. MetalLB metrics for BGP and BFD

OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. A sketch that shows how to query one of these metrics follows the list.

  • metallb_bfd_control_packet_input
    counts the number of BFD control packets received from each BFD peer.
  • metallb_bfd_control_packet_output
    counts the number of BFD control packets sent to each BFD peer.
  • metallb_bfd_echo_packet_input
    counts the number of BFD echo packets received from each BFD peer.
  • metallb_bfd_echo_packet_output
    counts the number of BFD echo packets sent to each BFD peer.
  • metallb_bfd_session_down_events
    counts the number of times the BFD session with a peer entered the
    down
    state.
  • metallb_bfd_session_up
    indicates the connection state with a BFD peer.
    1
    indicates the session is
    up
    and
    0
    indicates the session is
    down
    .
  • metallb_bfd_session_up_events
    counts the number of times the BFD session with a peer entered the
    up
    state.
  • metallb_bfd_zebra_notifications
    counts the number of BFD Zebra notifications for each BFD peer.
  • metallb_bgp_announced_prefixes_total
    counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning.
  • metallb_bgp_session_up
    indicates the connection state with a BGP peer.
    1
    indicates the session is
    up
    and
    0
    indicates the session is
    down
    .
  • metallb_bgp_updates_total
    counts the number of BGP
    update
    messages that were sent to a BGP peer.
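
You can query these metrics through the cluster monitoring stack. The following commands are a minimal sketch, assuming that your cluster exposes the default thanos-querier route in the openshift-monitoring namespace and that your user is authorized to query cluster metrics:

$ TOKEN=$(oc whoami -t)

$ HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')

$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?query=metallb_bgp_session_up"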

Additional resources

31.10.5. About collecting MetalLB data

You can use the

oc adm must-gather
CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator:

  • The namespace and child objects that the MetalLB Operator is deployed in
  • All MetalLB Operator custom resource definitions (CRDs)

The

oc adm must-gather
CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD:

  • /etc/frr/frr.conf
  • /etc/frr/frr.log
  • /etc/frr/daemons
    configuration file
  • /etc/frr/vtysh.conf

The log and configuration files in the preceding list are collected from the

frr
container in each
speaker
pod.

In addition to the log and configuration files, the

oc adm must-gather
CLI command collects the output from the following
vtysh
commands:

  • show running-config
  • show bgp ipv4
  • show bgp ipv6
  • show bgp neighbor
  • show bfd peer

No additional configuration is required when you run the

oc adm must-gather
CLI command.
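
For example, the following command runs the collection with the default settings and writes the results to a local directory; the destination directory name is illustrative:

$ oc adm must-gather --dest-dir=/tmp/must-gather-metallb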

Additional resources
