9.5. Network Bonding
Network bonding combines multiple NICs into a bond device, with the following advantages:
- The combined transmission speed of the bonded NICs is greater than that of any single NIC.
- Network bonding provides fault tolerance, because the bond device will not fail unless all its NICs fail.
Using NICs of the same make and model ensures that they support the same bonding options and modes.
Red Hat Virtualization’s default bonding mode, (Mode 4) Dynamic Link Aggregation, requires a switch that supports 802.3ad.
The logical networks of a bond must be compatible. A bond can carry only one non-VLAN logical network; all other logical networks on the bond must have unique VLAN IDs.
Bonding must be enabled for the switch ports. Consult the manual provided by your switch vendor for specific instructions.
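For example, on a switch running Cisco IOS, LACP can be enabled on the two ports of a bond with commands such as the following. The interface names and channel-group number are illustrative, and the syntax differs between vendors:
switch(config)# interface range GigabitEthernet0/1 - 2
switch(config-if-range)# channel-group 1 mode active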
You can create a network bond device using one of the following methods:
- Manually, in the Administration Portal, for a specific host
- Automatically, using LLDP Labeler, for unbonded NICs of all hosts in a cluster or data center
If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring iSCSI multipathing.
9.5.1. Creating a Bond Device in the Administration Portal
You can create a bond device on a specific host in the Administration Portal. The bond device can carry both VLAN-tagged and untagged traffic.
Procedure
- Click Compute → Hosts.
- Click the host’s name to open the details view.
- Click the Network Interfaces tab to list the physical network interfaces attached to the host.
- Click Setup Host Networks.
- Check the switch configuration. If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, hover your cursor over a physical NIC to view the switch port’s aggregation configuration.
- Drag and drop a NIC onto another NIC or onto a bond.
Note: Dragging one NIC onto another NIC creates a new bond. Dragging a NIC onto an existing bond adds the NIC to that bond. If the logical networks are incompatible, the bonding operation is blocked.
- Select the Bond Name and Bonding Mode from the drop-down menus. See Section 9.5.3, “Bonding Modes” for details.
- If you select the Custom bonding mode, you can enter bonding options in the text field, as in the following examples (the sketch after this procedure shows how to verify applied options on the host):
  - If your environment does not report link states with ethtool, you can set ARP monitoring by entering mode=1 arp_interval=1 arp_ip_target=192.168.0.2.
  - You can designate a NIC with higher throughput as the primary interface by entering mode=1 primary=eth0.
For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.
- Click OK.
- Attach a logical network to the new bond and configure it. See Section 9.4.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts” for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
- Optionally, you can select Verify connectivity between Host and Engine if the host is in maintenance mode.
- Click OK.
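After the host network is set up, you can optionally confirm from the host’s shell that the bond and any custom options took effect. The bonding driver exposes its state through sysfs; the following is a minimal sketch, assuming the bond is named bond0 and the ARP-monitoring example above was applied:
# cat /sys/class/net/bond0/bonding/mode
active-backup 1
# cat /sys/class/net/bond0/bonding/arp_interval
1
# cat /sys/class/net/bond0/bonding/arp_ip_target
192.168.0.2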
9.5.2. Creating a Bond Device with the LLDP Labeler Service
The LLDP Labeler service enables you to create a bond device automatically with all unbonded NICs, for all the hosts in one or more clusters or in the entire data center. The bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
NICs with incompatible logical networks cannot be bonded.
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
Prerequisites
- The interfaces must be connected to a Juniper switch.
- The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP.
Procedure
- Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - username - the username of the Manager administrator. The default is admin@internal.
  - password - the password of the Manager administrator. The default is 123456.
- Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.conf:
  - clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* configures LLDP Labeler to run on all clusters whose names start with Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
  - api_url - the full URL of the Manager’s API. The default is https://Manager_FQDN/ovirt-engine/api.
  - ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
  - auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
  - auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
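For illustration, a populated service configuration might look like the following sketch. The key = value layout and the cluster pattern shown are assumptions; check the comments in the shipped configuration file for the exact syntax:
clusters = Cluster*
api_url = https://Manager_FQDN/ovirt-engine/api
ca_file =
auto_bonding = true
auto_labeling = true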
- Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer. The default is 1h, as shown in the sketch below.
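For example, to run the service every 30 minutes instead of hourly, the [Timer] section of the timer unit would contain the following (the 30min interval is illustrative; any systemd time span is valid). Run systemctl daemon-reload afterwards so that systemd picks up the change:
[Timer]
OnUnitActiveSec=30min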
- Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
- To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
- Attach a logical network to the new bond and configure it. See Section 9.4.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts” for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
9.5.3. Bonding Modes
The packet dispersal algorithm is determined by the bonding mode. (See the Linux Ethernet Bonding Driver HOWTO for details.) Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
Red Hat Virtualization supports the following bonding modes, because they can be used in virtual machine (bridged) networks:
(Mode 1) Active-Backup
- One NIC is active. If the active NIC fails, one of the backup NICs replaces it as the only active NIC in the bond. The MAC address of this bond is visible only on the network adapter port. This prevents MAC address confusion that might occur if the MAC address of the bond were to change, reflecting the MAC address of the new active NIC.
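You can see which NIC is currently active through the bonding driver’s sysfs interface; a quick check, assuming a bond named bond0 whose active slave happens to be eth0:
# cat /sys/class/net/bond0/bonding/active_slave
eth0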
(Mode 2) Load Balance (balance-xor)
- The NIC that transmits a packet is selected by XORing the source and destination MAC addresses and taking the result modulo the total number of NICs. This algorithm ensures that the same NIC is selected for each destination MAC address; a numeric sketch follows below.
(Mode 3) Broadcast
- Packets are transmitted to all NICs.
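To make the Mode 2 selection rule concrete: the kernel’s default layer2 transmit hash XORs the last byte of the source and destination MAC addresses (together with the packet type ID) and takes the result modulo the slave count. A simplified sketch with shell arithmetic, using illustrative byte values and a two-NIC bond:
# src=0x1e; dst=0xd2
# echo $(( (src ^ dst) % 2 ))
0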
(Mode 4) Dynamic Link Aggregation (802.3ad) (Default)
- The NICs are aggregated into groups that share the same speed and duplex settings. All the NICs in the active aggregation group are used.
Note: (Mode 4) Dynamic Link Aggregation (802.3ad) requires a switch that supports 802.3ad.
The bonded NICs must have the same aggregator IDs. Otherwise, the Manager displays a warning exclamation mark icon on the bond in the Network Interfaces tab and the ad_partner_mac value of the bond is reported as 00:00:00:00:00:00. You can check the aggregator IDs by entering the following command:
# cat /proc/net/bonding/bond0
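Abbreviated, illustrative output for a healthy two-NIC 802.3ad bond (interface names and IDs will vary) shows the same Aggregator ID on both slaves:
Slave Interface: eth0
Aggregator ID: 1
...
Slave Interface: eth1
Aggregator ID: 1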
Red Hat Virtualization does not support the following bonding modes, because they cannot be used in bridged networks and are, therefore, incompatible with virtual machine logical networks:
(Mode 0) Round-Robin
- The NICs transmit packets in sequential order. Packets are transmitted in a loop that begins with the first available NIC in the bond and ends with the last available NIC in the bond. Subsequent loops start with the first available NIC.
(Mode 5) Balance-TLB, also called Transmit Load-Balance
- Outgoing traffic is distributed, based on the load, over all the NICs in the bond. Incoming traffic is received by the active NIC. If the NIC receiving incoming traffic fails, another NIC is assigned.
(Mode 6) Balance-ALB, also called Adaptive Load-Balance
- (Mode 5) Balance-TLB is combined with receive load-balancing for IPv4 traffic. ARP negotiation is used for balancing the receive load.