8.13. Configure teamd Runners
Runners are units of code which are compiled into the team daemon when an instance of the daemon is created. For an introduction to the teamd runners, see Section 8.4, “Understanding the Network Teaming Daemon and the "Runners"”.
8.13.1. Configure the broadcast Runner
To configure the broadcast runner, using an editor as root, add the following to the team JSON format configuration file:
{ "device": "team0", "runner": {"name": "broadcast"}, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {"name": "broadcast"},
"ports": {"em1": {}, "em2": {}}
}
See the teamd.conf(5) man page for more information.
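Once the file is in place, the configuration can be loaded by starting an instance of the team daemon against it and then querying its state. The following is a minimal sketch; the path /etc/teamd/team0.conf is an assumption, so substitute the location of your configuration file:
~]# teamd -g -f /etc/teamd/team0.conf -d
~]# teamdctl team0 state
The first command starts teamd with debug output (-g), reads the configuration file (-f), and daemonizes (-d); the second reports the runner in use and the status of each port.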
8.13.2. Configure the random Runner
The random runner behaves similarly to the round-robin runner, except that each packet is transmitted on a randomly selected port rather than on the ports in sequence.
To configure the random runner, using an editor as root, add the following to the team JSON format configuration file:
{ "device": "team0", "runner": {"name": "random"}, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {"name": "random"},
"ports": {"em1": {}, "em2": {}}
}
See the teamd.conf(5) man page for more information.
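If NetworkManager is managing the team, the same JSON can be supplied when the connection is created instead of being written to a file by hand. A sketch, assuming the connection and interface names shown:
~]# nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "random"}}'
The config property accepts either a JSON string or the name of a file containing the configuration.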
8.13.3. Configure the Round-robin Runner
To configure the round-robin runner, using an editor as root, add the following to the team JSON format configuration file. This is a very basic configuration for round-robin:
{ "device": "team0", "runner": {"name": "roundrobin"}, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {"name": "roundrobin"},
"ports": {"em1": {}, "em2": {}}
}
See the teamd.conf(5) man page for more information.
8.13.4. Configure the activebackup Runner
The activebackup runner can use all of the link watchers to determine the status of links in a team. Any one of the following examples can be added to the team JSON format configuration file:
{ "device": "team0", "runner": { "name": "activebackup" }, "link_watch": { "name": "ethtool" }, "ports": { "em1": { "prio": -10, "sticky": true }, "em2": { "prio": 100 } } }
{
"device": "team0",
"runner": {
"name": "activebackup"
},
"link_watch": {
"name": "ethtool"
},
"ports": {
"em1": {
"prio": -10,
"sticky": true
},
"em2": {
"prio": 100
}
}
}
{ "device": "team0", "runner": { "name": "activebackup" }, "link_watch": { "name": "ethtool" }, "ports": { "em1": { "prio": -10, "sticky": true, "queue_id": 4 }, "em2": { "prio": 100 } } }
{
"device": "team0",
"runner": {
"name": "activebackup"
},
"link_watch": {
"name": "ethtool"
},
"ports": {
"em1": {
"prio": -10,
"sticky": true,
"queue_id": 4
},
"em2": {
"prio": 100
}
}
}
This example adds 4 as the queue_id for em1. It uses the active-backup runner with ethtool as the link watcher. Port em2 has higher priority, but the sticky flag ensures that if em1 becomes active, it will stay active as long as the link remains up.
To configure the activebackup runner using ethtool as the link watcher and applying a delay, using an editor as root, add the following to the team JSON format configuration file. Port em2 again has higher priority, and the sticky flag ensures that if em1 becomes active, it stays active while the link remains up. Link changes are not propagated to the runner immediately; the configured delays are applied first:
{ "device": "team0", "runner": { "name": "activebackup" }, "link_watch": { "name": "ethtool", "delay_up": 2500, "delay_down": 1000 }, "ports": { "em1": { "prio": -10, "sticky": true }, "em2": { "prio": 100 } } }
{
"device": "team0",
"runner": {
"name": "activebackup"
},
"link_watch": {
"name": "ethtool",
"delay_up": 2500,
"delay_down": 1000
},
"ports": {
"em1": {
"prio": -10,
"sticky": true
},
"em2": {
"prio": 100
}
}
}
See the teamd.conf(5) man page for more information.
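To verify or change which port the activebackup runner is using at run time, the teamnl utility can query and set the activeport option on the team device. A sketch, assuming the team device is team0 and that 3 is the kernel interface index of the desired port:
~]# teamnl team0 getoption activeport
~]# teamnl team0 setoption activeport 3
Note that the option value is the port's interface index, not its name.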
8.13.5. Configure the loadbalance Runner
This runner can be used for two types of load balancing, active and passive. In active mode, traffic is constantly re-balanced using statistics of recent traffic to share out traffic as evenly as possible. In passive mode, streams of traffic are distributed randomly across the available links. This has a speed advantage due to lower processing overhead, and is often preferred in high-volume traffic applications because traffic usually consists of multiple streams, which will be distributed randomly between the available links. In this way, load sharing is accomplished without intervention by teamd.
To configure the loadbalance runner for hash-based passive transmit (Tx) load balancing, using an editor as root, add the following to the team JSON format configuration file:
{ "device": "team0", "runner": { "name": "loadbalance", "tx_hash": ["eth", "ipv4", "ipv6"] }, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {
"name": "loadbalance",
"tx_hash": ["eth", "ipv4", "ipv6"]
},
"ports": {"em1": {}, "em2": {}}
}
To configure the loadbalance runner for active transmit (Tx) load balancing using the basic load balancer, using an editor as root, add the following to the team JSON format configuration file:
{ "device": "team0", "runner": { "name": "loadbalance", "tx_hash": ["eth", "ipv4", "ipv6"], "tx_balancer": { "name": "basic" } }, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {
"name": "loadbalance",
"tx_hash": ["eth", "ipv4", "ipv6"],
"tx_balancer": {
"name": "basic"
}
},
"ports": {"em1": {}, "em2": {}}
}
See the teamd.conf(5) man page for more information.
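To confirm the configuration that teamd is actually running with, including the tx_hash and tx_balancer settings, the effective configuration can be dumped. A sketch, assuming the team device is team0:
~]# teamdctl team0 config dump actual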
8.13.6. Configure the LACP (802.3ad) Runner
To configure the LACP runner using ethtool as a link watcher, using an editor as root, add the following to the team JSON format configuration file. This is a configuration for connection to a link aggregation control protocol (LACP) capable counterpart. The LACP runner should use ethtool to monitor the status of a link. Note that only ethtool can be used for link monitoring because, in the case of arp_ping for example, the link would never come up: the link has to be established first, and only after that can packets, ARP included, go through. Using ethtool prevents this because it monitors each link layer individually:
{ "device": "team0", "runner": { "name": "lacp", "active": true, "fast_rate": true, "tx_hash": ["eth", "ipv4", "ipv6"] }, "link_watch": {"name": "ethtool"}, "ports": {"em1": {}, "em2": {}} }
{
"device": "team0",
"runner": {
"name": "lacp",
"active": true,
"fast_rate": true,
"tx_hash": ["eth", "ipv4", "ipv6"]
},
"link_watch": {"name": "ethtool"},
"ports": {"em1": {}, "em2": {}}
}
Active load balancing is possible with this runner in the same way as for the loadbalance runner. To enable active transmit (Tx) load balancing, add the following section:
"tx_balancer": {
"name": "basic"
}
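The tx_balancer section belongs inside the runner section. As a sketch, the LACP example above with active Tx balancing enabled would read:
{
"device": "team0",
"runner": {
"name": "lacp",
"active": true,
"fast_rate": true,
"tx_hash": ["eth", "ipv4", "ipv6"],
"tx_balancer": {
"name": "basic"
}
},
"link_watch": {"name": "ethtool"},
"ports": {"em1": {}, "em2": {}}
}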
See the teamd.conf(5) man page for more information.
8.13.7. Configure Monitoring of the Link State
The following methods of link state monitoring are available. To implement one of the methods, add the JSON format string to the team JSON format configuration file using an editor running with root privileges.
8.13.7.1. Configure Ethtool for link-state Monitoring
To set or edit a delay, in milliseconds, between the link coming up and the runner being notified about it, add or edit a section as follows:
"link_watch": {
"name": "ethtool",
"delay_up": 2500
}
To set or edit a delay, in milliseconds, between the link going down and the runner being notified about it, add or edit a section as follows:
"link_watch": {
"name": "ethtool",
"delay_down": 1000
}
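Both delays can be combined in a single link_watch section, as in the activebackup example in Section 8.13.4:
"link_watch": {
"name": "ethtool",
"delay_up": 2500,
"delay_down": 1000
}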
8.13.7.2. Configure ARP Ping for Link-state Monitoring
The team daemon teamd sends an ARP REQUEST to an address at the remote end of the link in order to determine if the link is up. The method used is the same as that of the arping utility, but teamd does not use that utility.
Prepare a file containing the new configuration in JSON format similar to the following example:
This configuration uses arp_ping as the link watcher. The missed_max option is a limit value of the maximum allowed number of missed replies (ARP replies, for example). It should be chosen in conjunction with the interval option to determine the total time before a link is reported as down. With interval 100 and missed_max 30, as below, a link is reported as down after roughly 30 × 100 ms = 3 seconds without a reply:
{
"device": "team0",
"runner": {"name": "activebackup"},
"link_watch":{
"name": "arp_ping",
"interval": 100,
"missed_max": 30,
"source_host": "192.168.23.2",
"target_host": "192.168.23.1"
},
"ports": {
"em1": {
"prio": -10,
"sticky": true
},
"em2": {
"prio": 100
}
}
}
To load a new configuration for a team port em2, from a file containing a JSON configuration, issue the following command as root:
~]# teamdctl port config update em2 JSON-config-file
Note that the old configuration will be overwritten and that any options omitted will be reset to the default values. See the teamdctl(8) man page for more team daemon control tool command examples.
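To review the port configuration that is now applied, it can be dumped back out. A sketch, assuming the team device is team0:
~]# teamdctl team0 port config dump em2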
8.13.7.3. Configure IPv6 NA/NS for Link-state Monitoring
{ "device": "team0", "runner": {"name": "activebackup"}, "link_watch": { "name": "nsna_ping", "interval": 200, "missed_max": 15, "target_host": "fe80::210:18ff:feaa:bbcc" }, "ports": { "em1": { "prio": -10, "sticky": true }, "em2": { "prio": 100 } } }
{
"device": "team0",
"runner": {"name": "activebackup"},
"link_watch": {
"name": "nsna_ping",
"interval": 200,
"missed_max": 15,
"target_host": "fe80::210:18ff:feaa:bbcc"
},
"ports": {
"em1": {
"prio": -10,
"sticky": true
},
"em2": {
"prio": 100
}
}
}
To configure the interval between sending NS/NA packets, add or edit a section as follows:
"link_watch": {
"name": "nsna_ping",
"interval": 200
}
The value is a positive number in milliseconds. It should be chosen in conjunction with the missed_max option in order to determine the total time before a link is reported as down.
To configure the maximum number of missed NS/NA reply packets to allow before reporting the link as down, add or edit a section as follows:
"link_watch": {
"name": "nsna_ping",
"missed_max": 15
}
The missed_max option is a limit value of the maximum allowed number of missed NS/NA reply packets. If this number is exceeded, the link is reported as down. It should be chosen in conjunction with the interval option in order to determine the total time before a link is reported as down.
To configure the host name that is resolved to the IPv6 address used as the target address for the NS/NA packets, add or edit a section as follows:
"link_watch": {
"name": "nsna_ping",
"target_host": "MyStorage"
}
The “target_host” option contains the host name to be converted to an IPv6 address, which will be used as the target address for the NS/NA packets. An IPv6 address can be used in place of a host name.
See the teamd.conf(5) man page for more information.
8.13.8. Configure Port Selection Override
The physical port which transmits a frame is normally selected by the kernel part of the team driver, and is not relevant to the user or system administrator. The output port is selected using the policies of the selected team mode (teamd runner). Occasionally, however, it is helpful to direct certain classes of outgoing traffic to certain physical interfaces to implement slightly more complex policies. By default the team driver is multiqueue aware, and 16 queues are created when the driver initializes. If more or fewer queues are required, the Netlink attribute tx_queues can be used to change this value during team driver instance creation.
The queue ID for a port can be set by the port configuration option queue_id as follows:
{
"queue_id": 3
}
These queue IDs can be used in conjunction with the tc utility to configure a multiqueue queue discipline and filters to bias certain traffic to be transmitted on certain port devices. For example, if using the above configuration and wanting to force all traffic bound to 192.168.1.100 to use enp1s0 in the team as its output device, issue commands as root in the following format:
~]# tc qdisc add dev team0 handle 1 root multiq
~]# tc filter add dev team0 protocol ip parent 1: prio 1 u32 match ip dst \
192.168.1.100 action skbedit queue_mapping 3
This mechanism of overriding runner selection logic in order to bind traffic to a specific port can be used with all runners.
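The resulting queue discipline and filter can be inspected to confirm they are in place. A sketch, assuming the commands above were applied to team0:
~]# tc qdisc show dev team0
~]# tc filter show dev team0 parent 1: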
8.13.9. Configure BPF-based Tx Port Selectors
The loadbalance and LACP runners use hashes of packets to sort network traffic flows. The hash computation mechanism is based on the Berkeley Packet Filter (BPF) code. The BPF code is used to generate a hash rather than make a policy decision for outgoing packets. The hash length is 8 bits, giving 256 variants. This means many different socket buffers (SKB) can have the same hash and therefore pass traffic over the same link. The use of a short hash is a quick way to sort traffic into different streams for the purposes of load balancing across multiple links. In static mode, the hash is only used to decide out of which port the traffic should be sent. In active mode, the runner will continually reassign hashes to different ports in an attempt to reach a perfect balance.
The following fragment types or strings can be used for packet Tx hash computation:
eth — Uses source and destination MAC addresses.
vlan — Uses the VLAN ID.
ipv4 — Uses source and destination IPv4 addresses.
ipv6 — Uses source and destination IPv6 addresses.
ip — Uses source and destination IPv4 and IPv6 addresses.
l3 — Uses source and destination IPv4 and IPv6 addresses.
tcp — Uses source and destination TCP ports.
udp — Uses source and destination UDP ports.
sctp — Uses source and destination SCTP ports.
l4 — Uses source and destination TCP, UDP, and SCTP ports.
These strings can be used by adding a line in the following format to the loadbalance runner:
"tx_hash": ["eth", "ipv4", "ipv6"]
See Section 8.13.5, “Configure the loadbalance Runner” for an example.