6.5. Bond Devices


6.5.1. Bonding Methods

You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability.

Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.

Note

For a bond in Mode 4, all slaves must be configured properly on the switch. If none of them is configured properly on the switch, the ad_partner_mac is reported as 00:00:00:00:00:00, and the Manager displays a warning in the form of an exclamation mark icon on the bond in the Network Interfaces tab. No warning is provided if any of the slaves are up and running.
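You can also check this from the host itself. For example, assuming the bond is named bond0 (a hypothetical name), the partner MAC address negotiated by the kernel appears in the bond’s procfs entry, and a value of all zeros indicates that no LACP partner has responded:

# grep -i "partner mac" /proc/net/bonding/bond0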

There are two methods for creating bond devices: manually, using the Administration Portal, or automatically, using LLDP Labeler.

6.5.2. Creating a Bond Device Using the Administration Portal

Using the Administration Portal, you can bond multiple network interfaces, pre-existing bond devices, and combinations of the two. A bond can carry both VLAN-tagged and non-VLAN-tagged traffic.

Note

If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current aggregation configuration. Red Hat recommends checking the configuration prior to creating a bond device.
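You can also view the same LLDP information from the host’s command line. A minimal sketch, assuming NetworkManager’s LLDP listening is enabled for the interface’s connection profile (the connection.lldp property) and the interface is named eth0:

# nmcli device lldp ifname eth0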

Procedure

  1. Click Compute Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab to list the physical network interfaces attached to the host.
  4. Click Setup Host Networks.
  5. Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
  6. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.

    If the devices are incompatible, the bond operation fails, and a message suggests how to correct the compatibility issue. For information about bonding logic, see Section 6.5.4, “Bonding Logic in Red Hat Virtualization”.

  7. Select the Bond Name and Bonding Mode from the drop-down menus.

    You can select bonding modes 1, 2, 4, and 5. Any other mode can be configured using the Custom option. For more information about bond modes, see Section 6.5.5, “Bonding Modes”.

  8. Click OK to create the bond and close the Create New Bond window.
  9. Assign a logical network to the newly created bond device.
  10. Optionally, select Verify connectivity between Host and Engine and/or Save network configuration.
  11. Click OK.

Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab for the selected host.
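To confirm the bond from the host side, you can inspect the kernel’s bonding state. A minimal sketch, assuming the new bond is named bond0:

# cat /proc/net/bonding/bond0
# ip -brief link show master bond0

The first command reports the bonding mode, the slave interfaces, and their link states; the second lists the interfaces enslaved to the bond.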

Note

While the host’s configuration is being updated, an indication of this status appears as follows:

  • An Updating icon appears below each network interface in the host’s Network Interfaces tab.
  • The status Networks updating appears:

    • In the host’s Status column in the Compute Hosts window.
    • In the host’s Status column in the Hosts tab that you access when selecting a cluster in the Compute > Clusters window.
    • In the host’s Network Device Status column in the Hosts tab that you access when selecting a network in the Network > Networks window.


6.5.3. Creating a Bond Device Automatically

Red Hat Virtualization enables you to automate the bonding process for non-bonded NICs, for one or more clusters, or for the entire data center, using the LLDP Labeler service. The bond is created using bonding mode 4. For more information about bond modes, see Section 6.5.5, “Bonding Modes”.

Bonding Devices Automatically

By default, LLDP Labeler runs as an hourly service. This is useful if you make hardware changes, for example, to NICs, switches, or cables, or if you change switch configurations.

Prerequisites

  • The interfaces must be connected to a Juniper switch.
  • The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP.
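For illustration only, a minimal Junos-style sketch of these prerequisites, assuming two host-facing ports named xe-0/0/0 and xe-0/0/1 and an aggregated interface named ae0; consult your switch documentation for the exact syntax:

set interfaces xe-0/0/0 ether-options 802.3ad ae0
set interfaces xe-0/0/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set protocols lldp interface all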

Procedure

  1. Configure the Manager’s username and password by opening /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf in a text editor and updating the following values:

    • username - the username of the Manager’s administrator. The default is admin@internal.
    • password - the password of the Manager’s administrator. The default is 123456.
  2. Configure the LLDP Labeler service by opening /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf in a text editor and updating the following values:

    • clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters starting with the word Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
    • api_url - the full URL of the Manager’s API. The default is https://ovirt-engine/ovirt-engine/api.
    • ca_file - the path to the custom certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
    • auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
    • auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
  3. Optional. Configure the service to run at a different time interval by editing /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer in a text editor and changing the value of OnUnitActiveSec. The default is 1h. You can check the timer’s schedule after completing this procedure, as shown below.
  4. Enable the service by running:

    # systemctl enable --now ovirt-lldp-labeler
  5. Optional. To invoke the service manually, run:

    # /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
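To verify that the timer is active and see when the service runs next, you can query systemd. The unit names here are assumed from the package defaults:

# systemctl list-timers 'ovirt-lldp-labeler*'
# journalctl -u ovirt-lldp-labeler --since "1 hour ago"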

If the devices are incompatible, LLDP Labeler does not bond the NICs that violate the compatibility rules. For information about bonding logic, see Section 6.5.4, “Bonding Logic in Red Hat Virtualization”.

Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab for the selected host. If the NICs were not already connected to logical networks, assign a logical network to the newly created bond device. See Section 6.4.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts” for details.
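You can list the bonds that the service created, together with their bonding mode, from any affected host. A quick check using iproute2:

# ip -details link show type bond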

6.5.4. Bonding Logic in Red Hat Virtualization

There are several distinct bond creation scenarios, each with its own logic.

Two factors that affect bonding logic are:

  • Does either device already carry logical networks?
  • Are the devices carrying compatible logical networks?

Note

If multiple logical networks are connected to a NIC, only one of the networks can be non-VLAN. All remaining logical networks must have unique VLANs.

In addition, for the bond to function correctly, all of its NICs must be connected to the same switch.

Note

If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring multipathing for iSCSI.

Table 6.6. Bonding Scenarios, Results, and Creation Method

NIC + NIC

Result: The Create New Bond window is displayed, and you can configure a new bond device. If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

Method: Administration Portal or LLDP Labeler

NIC + Bond

Result: The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible. If the NIC and the bond carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

Method: Administration Portal

Bond + Bond

Result: If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces and carries all logical networks of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond. If the bond devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.

Method: Administration Portal

6.5.5. Bonding Modes

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.

The packet dispersal algorithm for a bond is determined by the bonding mode used.
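You can read the active mode of an existing bond from sysfs on the host. A minimal sketch, assuming a bond named bond0 in Mode 4:

# cat /sys/class/net/bond0/bonding/mode
802.3ad 4

The output shows the mode’s name and number; 802.3ad corresponds to Mode 4, described below.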

Important

Modes 1, 2, 3, and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5, and 6 support non-virtual machine (bridgeless) networks only.

Red Hat Virtualization uses Mode 4 by default, but supports the following common bonding modes:

Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges. Therefore, it is not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
One network interface card is active, while all the other network interface cards are in a backup state. If the active network interface card fails, one of the backup network interface cards replaces it as the only active network interface card in the bond. The MAC address of the bond is visible only on the network adapter port, to prevent the confusion that might occur if the MAC address of the bond changed to reflect that of the new active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets using the result of the following operation: (XOR the source MAC address with the destination MAC address) modulo network interface card count. This calculation ensures that the same network interface card is selected for each destination MAC address (see the worked example after this list). Mode 2 provides fault tolerance and load-balancing and is supported in Red Hat Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 4 (dynamic link aggregation policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load-balancing policy)
Ensures that the outward traffic is distributed, based on the load, over all the network interface cards in the bond and that the incoming traffic is received by the active network interface card. If the network interface card receiving incoming traffic fails, another network interface card is assigned to receive it. Mode 5 cannot be used in conjunction with bridges. Therefore, it is not compatible with virtual machine logical networks.
Mode 6 (adaptive load-balancing policy)
Combines Mode 5 (adaptive transmit load-balancing policy) with receive load-balancing for IPv4 traffic and has no special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges. Therefore, it is not compatible with virtual machine logical networks.
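As a worked example of the Mode 2 calculation, consider a bond with two network interface cards, a source MAC address ending in 0x0a, and a destination MAC address ending in 0x0d. A simplified layer2 hash XORs the final bytes and takes the result modulo the slave count:

# echo $(( (0x0a ^ 0x0d) % 2 ))
1

The result 1 selects the second network interface card in the bond, and the same card is selected for every packet to that destination MAC address.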

6.5.6. Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 6.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:

mode=4 xmit_hash_policy=layer2+3

Example 6.2. ARP Monitoring

ARP monitoring is useful for systems that can’t or don’t report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 6.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 primary=eth0
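After you save the host network configuration, you can confirm that custom options were applied by reading the bond’s sysfs attributes on the host. A sketch, assuming the bond is named bond0:

# cat /sys/class/net/bond0/bonding/xmit_hash_policy
# cat /sys/class/net/bond0/bonding/arp_interval
# cat /sys/class/net/bond0/bonding/primary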