
Chapter 3. Network bonding considerations


You can use network bonding, also known as link aggregation, to combine multiple network interfaces into a single logical interface. You can choose from different modes that determine how network traffic is distributed across the bonded interfaces. Each mode provides fault tolerance, and some modes also provide load balancing capabilities for your network. Red Hat supports Open vSwitch (OVS) bonding and kernel bonding.

3.1. Open vSwitch (OVS) bonding

With an OVS bonding configuration, you create a single, logical interface by connecting each physical network interface controller (NIC) as a port to a specific bond. This single bond then handles all network traffic, effectively replacing the function of individual interfaces.

Consider the following architectural layout for OVS bridges that interact with OVS interfaces:

  • A network interface uses a bridge Media Access Control (MAC) address for managing protocol-level traffic and other administrative tasks, such as IP address assignment.
  • The physical MAC addresses of physical interfaces do not handle traffic.
  • OVS handles all MAC address management at the OVS bridge level.

This layout simplifies bond interface management as bonds act as data paths, where centralized MAC address management happens at the OVS bridge level.

For OVS bonding, you can select either active-backup mode or balance-slb mode. A bonding mode specifies the policy for how bond interfaces get used during network transmission.

3.1.1. Enabling active-backup mode for your cluster

The active-backup mode provides fault tolerance for network connections by switching to a backup link when the primary link fails.

The mode specifies the following ports for your cluster:

  • An active port, where a single physical interface sends and receives all traffic at any given time.
  • Standby ports, where all other ports act as backup links and continuously monitor their link status.

During a failover process, if an active port or its link fails, the bonding logic switches all network traffic to a standby port. This standby port becomes the new active port. For failover to work, all ports in a bond must share the same Media Access Control (MAC) address.
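As an illustrative sketch, an OVS bond in active-backup mode can be declared with NMState. This fragment is hypothetical; the bridge and interface names (br-ex, ovs-bond, enp2s0, enp3s0) are examples only:

```yaml
# Hypothetical NMState fragment: an OVS bridge whose bond runs in
# active-backup mode. One member port carries traffic; the other
# remains on standby until a failover occurs.
interfaces:
  - name: br-ex
    type: ovs-bridge
    state: up
    bridge:
      port:
        - name: ovs-bond
          link-aggregation:
            mode: active-backup
            port:
              - name: enp2s0
              - name: enp3s0
```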

3.1.2. Enabling OVS balance-slb mode for your cluster

You can enable the Open vSwitch (OVS) balance-slb mode so that two or more physical interfaces can share their network traffic. A balance-slb mode interface can give source load balancing (SLB) capabilities to a cluster that runs virtualization workloads, without requiring load balancing negotiation with the network switch.

Currently, source load balancing runs on a bond interface that connects to an auxiliary bridge, such as br-phy. Source load balancing balances traffic only across different Media Access Control (MAC) address and virtual local area network (VLAN) combinations. Note that all OVN-Kubernetes pod traffic uses the same MAC address and VLAN, so this traffic cannot be load balanced across multiple physical interfaces.

The following diagram shows balance-slb mode on a simple cluster infrastructure layout. Virtual machines (VMs) connect to specific localnet NetworkAttachmentDefinition (NAD) custom resource definitions (CRDs), NAD 0 or NAD 1. Each NAD provides VMs with access to the underlying physical network, supporting VLAN-tagged or untagged traffic. A br-ex OVS bridge receives traffic from VMs and passes the traffic to the next OVS bridge, br-phy. The br-phy bridge functions as the controller for the SLB bond. The SLB bond balances traffic from different VM ports over the physical interface links, such as eno0 and eno1. Additionally, ingress traffic from either physical interface can pass through the set of OVS bridges to reach the VMs.

Figure 3.1. OVS balance-slb mode operating on a localnet with two NADs

You can integrate the balance-slb mode interface into primary or secondary network types by using OVS bonding. Note the following points about OVS bonding:

  • Supports the OVN-Kubernetes CNI plugin and integrates with it easily.
  • Natively supports balance-slb mode.

Prerequisites

  • You have more than one physical interface attached to your primary network and you defined the interfaces in a MachineConfig file.
  • You created a manifest object and defined a customized br-ex bridge in the object configuration file.
  • You have more than one physical interface attached to your primary network and you defined the interfaces in a NAD CRD file.

Procedure

  1. For each bare-metal host in your cluster, define a networkConfig section in the install-config.yaml file, similar to the following example:

    # ...
    networkConfig:
      interfaces:
        - name: enp1s0 # <1>
          type: ethernet
          state: up
          ipv4:
            dhcp: true
            enabled: true
          ipv6:
            enabled: false
        - name: enp2s0 # <2>
          type: ethernet
          state: up
          mtu: 1500 # <3>
          ipv4:
            dhcp: true
            enabled: true
          ipv6:
            dhcp: true
            enabled: true
        - name: enp3s0 # <4>
          type: ethernet
          state: up
          mtu: 1500
          ipv4:
            enabled: false
          ipv6:
            enabled: false
    # ...

    <1> The interface for the provisioned network interface controller (NIC).
    <2> The first bonded interface, which pulls in the Ignition config file for the bond interface.
    <3> Manually sets the br-ex maximum transmission unit (MTU) on the bond ports.
    <4> The second bonded interface, which is part of a minimal configuration that pulls Ignition during cluster installation.
  2. Define each network interface in an NMState configuration file:

    Example NMState configuration file that defines multiple network interfaces

    ovn:
      bridge-mappings:
        - localnet: localnet-network
          bridge: br-ex
          state: present
    interfaces:
      - name: br-ex
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          port:
            - name: br-ex
            - name: patch-ex-to-phy
        ovs-db:
          external_ids:
            bridge-uplink: "patch-ex-to-phy"
      - name: br-ex
        type: ovs-interface
        state: up
        mtu: 1500 # <1>
        ipv4:
          enabled: true
          dhcp: true
          auto-route-metric: 48
        ipv6:
          enabled: false
          dhcp: false
          auto-route-metric: 48
      - name: br-phy
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          port:
            - name: patch-phy-to-ex
            - name: ovs-bond
              link-aggregation:
                mode: balance-slb
                port:
                  - name: enp2s0
                  - name: enp3s0
      - name: patch-ex-to-phy
        type: ovs-interface
        state: up
        patch:
          peer: patch-phy-to-ex
      - name: patch-phy-to-ex
        type: ovs-interface
        state: up
        patch:
          peer: patch-ex-to-phy
      - name: enp1s0
        type: ethernet
        state: up
        ipv4:
          dhcp: true
          enabled: true
        ipv6:
          enabled: false
      - name: enp2s0
        type: ethernet
        state: up
        mtu: 1500
        ipv4:
          enabled: false
        ipv6:
          enabled: false
      - name: enp3s0
        type: ethernet
        state: up
        mtu: 1500
        ipv4:
          enabled: false
        ipv6:
          enabled: false
    # ...

    <1> Manually set the br-ex MTU on the bond ports.
  3. Use the base64 command to encode the interface content of the NMState configuration file:

    $ base64 -w0 <nmstate_configuration>.yml # <1>

    <1> The -w0 option prevents line wrapping during the base64 encoding operation.
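    The encoding step can be sketched end to end. The file name and contents below are placeholders for illustration, not values from this procedure:

    ```shell
    # Create a placeholder NMState file (stand-in for your real configuration).
    printf 'interfaces:\n  - name: br-ex\n' > cluster.yml

    # Encode without line wrapping; -w0 keeps the output on a single line
    # so that it can be embedded in the MachineConfig source field.
    encoded=$(base64 -w0 cluster.yml)

    # Verify the round trip: decoding must reproduce the original content.
    decoded=$(printf '%s' "$encoded" | base64 -d)
    printf '%s\n' "$decoded"
    ```

    The single-line output of base64 -w0 is what you paste in place of the <base64_encoded_nmstate_configuration> placeholder in the next step.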
  4. Create MachineConfig manifest files for the master role and the worker role, and embed the base64-encoded string from the previous step in each MachineConfig manifest file. The following example manifest file configures the master role for all nodes in a cluster. You can also create manifest files for the master and worker roles that are specific to a node.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 10-br-ex-master # <1>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> # <2>
            mode: 0644
            overwrite: true
            path: /etc/nmstate/openshift/cluster.yml # <3>

    <1> The name of the policy.
    <2> Writes the base64-encoded information to the specified path.
    <3> The path to the cluster.yml file. To target a single node in your cluster, specify the short hostname of the node in the file name, such as <node_short_hostname>.yml.
  5. Save each MachineConfig manifest file to the ./<installation_directory>/manifests directory, where <installation_directory> is the directory in which the installation program creates files.

    The Machine Config Operator (MCO) takes the content from each manifest file and consistently applies the content to all selected nodes during a rolling update.

3.2. Kernel bonding

You can use kernel bonding, a built-in Linux kernel feature, to aggregate multiple Ethernet interfaces into a single logical interface. Combining interfaces in this way can enhance network performance by increasing bandwidth, and it provides redundancy in case of a link failure.

Kernel bonding is the default mode if no bond interfaces depend on OVS bonds. This bonding type does not offer the same level of customization as OVS bonding.

In kernel-bonding mode, the bond interfaces exist outside the bridge interface, which means that they are not in the bridge's data path. Network traffic in this mode is not sent or received on a bond interface port of the bridge; instead, it requires additional bridging capabilities for MAC address assignment at the kernel level.

If you enabled kernel-bonding mode on the network interface controllers (NICs) for your nodes, you must specify a Media Access Control (MAC) address failover policy. This configuration prevents node communication issues with the bond interfaces, such as eno1f0 and eno2f0.

Red Hat supports only the following value for the fail_over_mac parameter:

  • 0: Specifies the none value, which disables MAC address failover so that all interfaces receive the same MAC address as the bond interface. This is the default value.

Red Hat does not support the following values for the fail_over_mac parameter:

  • 1: Specifies the active value, which sets the MAC address of the bond interface to always match that of the currently active interface. If the active interface changes during a failover, the MAC address of the bond interface changes to match the MAC address of the new active interface.
  • 2: Specifies the follow value, so that during a failover the newly active interface takes the MAC address of the bond interface, and the formerly active interface receives the MAC address of the newly active interface.
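As an illustrative sketch, a kernel bond that keeps the supported fail_over_mac setting can be declared with NMState. This fragment is hypothetical; the bond name and member names (bond0, eno1f0, eno2f0) are examples only:

```yaml
# Hypothetical NMState fragment: a Linux kernel bond in active-backup mode
# with the supported fail_over_mac value of none (0, the default), so all
# member interfaces receive the same MAC address as the bond interface.
interfaces:
  - name: bond0
    type: bond
    state: up
    link-aggregation:
      mode: active-backup
      options:
        fail_over_mac: none
      port:
        - eno1f0
        - eno2f0
```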