Load Balancer Administration
Configuring Keepalived and HAProxy
Abstract
Chapter 1. Load Balancer Overview
1.1. keepalived
The keepalived daemon runs on both the active and passive LVS routers. All routers running keepalived use the Virtual Router Redundancy Protocol (VRRP). The active router sends VRRP advertisements at periodic intervals; if the backup routers fail to receive these advertisements, a new active router is elected.
keepalived can also perform load balancing tasks for real servers.
Keepalived is started with the systemctl command, which reads the configuration file /etc/keepalived/keepalived.conf. On the active router, the keepalived daemon starts the LVS service and monitors the health of the services based on the configured topology. Using VRRP, the active router sends periodic advertisements to the backup routers. On the backup routers, the VRRP instance determines the running status of the active router. If the active router fails to advertise after a user-configurable interval, Keepalived initiates failover. During failover, the virtual servers are cleared. The new active router takes control of the virtual IP address (VIP), sends out an ARP message, sets up IPVS table entries (virtual servers), begins health checks, and starts sending VRRP advertisements.
If a real server fails, keepalived detects the failure and removes the server from the server pool.
1.2. haproxy
haproxy is able to process several events on thousands of connections across a pool of multiple real servers acting as one virtual server. The scheduler determines the volume of connections and either assigns them equally among servers in non-weighted schedules or, in weighted algorithms, gives a higher connection volume to servers that can handle greater capacity.
1.3. keepalived and haproxy
Chapter 2. Keepalived Overview
- To balance the load across the real servers.
- To check the integrity of the services on each real server.
Note
Aug 3 17:07:19 hostname Keepalived_vrrp[123]: receive an invalid ip number count associated with VRID!
Aug 3 17:07:19 hostname Keepalived_vrrp[123]: bogus VRRP packet received on em2 !!!
Aug 3 17:07:19 hostname Keepalived_vrrp[123]: VRRP_Instance(vrrp_ipv6) ignoring received advertisment...
2.1. A Basic Keepalived Load Balancer Configuration
Figure 2.1. A Basic Load Balancer Configuration
If eth0 is connected to the Internet, then multiple virtual servers can be assigned to eth0. Alternatively, each virtual server can be associated with a separate device per service. For example, HTTP traffic can be handled on eth0 at 192.168.1.111 while FTP traffic can be handled on eth0 at 192.168.1.222.
2.2. A Three-Tier keepalived Load Balancer Configuration
Figure 2.2. A Three-Tier Load Balancer Configuration
2.3. keepalived Scheduling Overview
2.3.1. Keepalived Scheduling Algorithms
- Round-Robin Scheduling
- Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular because it is network-connection based and not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries.
- Weighted Round-Robin Scheduling
- Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests.
- Least-Connection
- Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice.
- Weighted Least-Connections
- Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity.
- Locality-Based Least-Connection Scheduling
- Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server assigned to that address unless that server is above its capacity and another server is working at half its load, in which case it assigns the IP address to that least-loaded real server.
- Locality-Based Least-Connection Scheduling with Replication Scheduling
- Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication.
- Destination Hash Scheduling
- Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster.
- Source Hash Scheduling
- Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls.
- Shortest Expected Delay
- Distributes connection requests to the server with the shortest expected delay, calculated as the number of active connections on a given server divided by its assigned weight.
- Never Queue
- A two-pronged scheduler that first finds and sends connection requests to a server that is idling, or has no connections. If there are no idling servers, the scheduler defaults to the server that has the least delay in the same manner as Shortest Expected Delay.
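In a Keepalived configuration, the scheduling algorithm for a virtual server is selected with the lb_algo option (for example, rr for round-robin, wrr for weighted round-robin, lc for least-connection, or wlc for weighted least-connections). The following is a minimal sketch of a virtual_server block; the addresses and timing values are illustrative, not prescriptive:

```
virtual_server 10.0.0.1 80 {
    delay_loop 6            # interval between health checks, in seconds
    lb_algo wlc             # weighted least-connections scheduling
    lb_kind NAT             # routing method
    protocol TCP

    real_server 192.168.1.20 80 {
        weight 2            # capacity weight used by the scheduler
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```

Changing lb_algo is all that is needed to switch between the algorithms described above; the weight option on each real_server only influences the weighted schedulers.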
2.3.2. Server Weight and Scheduling
2.4. Routing Methods
2.4.1. NAT Routing
Figure 2.3. Load Balancer Implemented with NAT Routing
The IPVS modules utilize their own internal NAT routines that are independent of iptables and ip6tables NAT. This facilitates both IPv4 and IPv6 NAT when the real server is configured for NAT as opposed to DR in the /etc/keepalived/keepalived.conf file.
2.4.2. Direct Routing
Figure 2.4. Load Balancer Implemented with Direct Routing
2.4.2.1. Direct Routing and the ARP Limitation
ARP requests can be filtered using the arptables utility, and IP packets can be filtered using iptables or firewalld. The two approaches differ as follows:
- The ARP filtering method blocks ARP requests from reaching the real servers. This prevents ARP from associating VIPs with real servers, leaving the active virtual server to respond with its MAC address.
- The IP packet filtering method permits routing packets to real servers with other IP addresses. This completely sidesteps the ARP problem by not configuring VIPs on real servers in the first place.
2.5. Persistence and Firewall Marks with Keepalived
2.5.1. Persistence
2.5.2. Firewall Marks
Chapter 3. Setting Up Load Balancer Prerequisites for Keepalived
A Load Balancer configuration using keepalived consists of two basic groups: the LVS routers and the real servers. To prevent a single point of failure, each group should have at least two members.
3.1. The NAT Load Balancer Network
- Network Layout
- The topology for Load Balancer using NAT routing is the easiest to configure from a network layout perspective because only one access point to the public network is needed. The real servers are on a private network and respond to all requests through the LVS router.
- Hardware
- In a NAT topology, each real server only needs one NIC since it will only be responding to the LVS router. The LVS routers, on the other hand, need two NICs each to route traffic between the two networks. Because this topology creates a network bottleneck at the LVS router, Gigabit Ethernet NICs can be employed on each LVS router to increase the bandwidth the LVS routers can handle. If Gigabit Ethernet is employed on the LVS routers, any switch connecting the real servers to the LVS routers must have at least two Gigabit Ethernet ports to handle the load efficiently.
- Software
- Because the NAT topology requires the use of iptables for some configurations, there can be a large amount of software configuration outside of Keepalived. In particular, FTP services and the use of firewall marks require extra manual configuration of the LVS routers to route requests properly.
3.1.1. Configuring Network Interfaces for Load Balancer with NAT
The public interfaces of the LVS routers (eth0) will be on the 203.0.113.0/24 network and the private interfaces that link to the real servers (eth1) will be on the 10.11.12.0/24 network.
Important
The NetworkManager service is not compatible with Load Balancer. In particular, IPv6 VIPs are known not to work when the IPv6 addresses are assigned by SLAAC. For this reason, the examples shown here use configuration files and the network service.
The network configuration file for the public interface, /etc/sysconfig/network-scripts/ifcfg-eth0, could look something like this:
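The following is a sketch of such a file, assuming a static configuration; the host and gateway addresses on the 203.0.113.0/24 network are illustrative:

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=203.0.113.9
NETMASK=255.255.255.0
GATEWAY=203.0.113.254
```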
The network configuration file, /etc/sysconfig/network-scripts/ifcfg-eth1, for the private NAT interface on the LVS router could look something like this:
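The following is a sketch using an illustrative address on the 10.11.12.0/24 network; the private interface needs no GATEWAY line:

```
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.11.12.9
NETMASK=255.255.255.0
```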
The VIPs are configured with the virtual_ipaddress option in the /etc/keepalived/keepalived.conf file. For more information, see Section 4.1, “A Basic Keepalived configuration”.
3.1.2. Routing on the Real Servers
Note
On the real servers, the network configuration file, /etc/sysconfig/network-scripts/ifcfg-eth0, could look similar to this:
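The following is a sketch assuming an illustrative real-server address of 10.11.12.1 and an LVS router private (NAT) address of 10.11.12.9; the GATEWAY line must point at the LVS router's private interface so that replies return through the router:

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.11.12.1
NETMASK=255.255.255.0
GATEWAY=10.11.12.9
```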
Warning
If the real servers have more than one network interface configured with a GATEWAY= line, the first one to come up will get the gateway. Therefore if both eth0 and eth1 are configured and eth1 is used for Load Balancer, the real servers may not route requests properly.
It is best to turn off extraneous network interfaces by setting ONBOOT=no in their network configuration files within the /etc/sysconfig/network-scripts/ directory or by making sure the gateway is correctly set in the interface which comes up first.
3.1.3. Enabling NAT Routing on the LVS Routers
The LVS routers use keepalived to configure IP information.
Warning
Do not configure the floating IP addresses for eth0 or eth1 by manually editing network configuration files or using a network configuration tool. Instead, configure them by means of the keepalived.conf file.
When finished, start the keepalived service. Once it is up and running, the active LVS router will begin routing requests to the pool of real servers.
3.2. Load Balancer Using Direct Routing
- Network Layout
- In a direct routing Load Balancer setup, the LVS router needs to receive incoming requests and route them to the proper real server for processing. The real servers then need to directly route the response to the client. So, for example, if the client is on the Internet, and sends the packet through the LVS router to a real server, the real server must be able to connect directly to the client through the Internet. This can be done by configuring a gateway for the real server to pass packets to the Internet. Each real server in the server pool can have its own separate gateway (and each gateway with its own connection to the Internet), allowing for maximum throughput and scalability. For typical Load Balancer setups, however, the real servers can communicate through one gateway (and therefore one network connection).
- Hardware
- The hardware requirements of a Load Balancer system using direct routing are similar to other Load Balancer topologies. While the LVS router needs to be running Red Hat Enterprise Linux to process the incoming requests and perform load-balancing for the real servers, the real servers do not need to be Linux machines to function correctly. The LVS routers need one or two NICs each (depending on whether there is a backup router). You can use two NICs for ease of configuration and to distinctly separate traffic; incoming requests are handled by one NIC and packets are routed to the real servers on the other. Since the real servers bypass the LVS router and send outgoing packets directly to a client, a gateway to the Internet is required. For maximum performance and availability, each real server can be connected to its own separate gateway which has its own dedicated connection to the network to which the client is connected (such as the Internet or an intranet).
- Software
- There is some configuration outside of keepalived that needs to be done, especially for administrators facing ARP issues when using Load Balancer by means of direct routing. Refer to Section 3.2.1, “Direct Routing Using arptables” or Section 3.2.3, “Direct Routing Using iptables” for more information.
3.2.1. Direct Routing Using arptables
In the direct routing method using arptables, each real server must have the virtual IP address configured, so it can directly route packets. ARP requests for the VIP are ignored entirely by the real servers, and any ARP packets that might otherwise be sent containing the VIPs are mangled to contain the real server's IP instead of the VIPs.
With the arptables method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables method allows multiple instances of Apache HTTP Server to be running and bound explicitly to different VIPs on the system.
A disadvantage of the arptables method is that the VIPs cannot be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools.
- Create the ARP table entries for each virtual IP address on each real server (the real_ip is the IP the director uses to communicate with the real server; often this is the IP bound to eth0):

  arptables -A IN -d <virtual_ip> -j DROP
  arptables -A OUT -s <virtual_ip> -j mangle --mangle-ip-s <real_ip>

  This will cause the real servers to ignore all ARP requests for the virtual IP addresses, and change any outgoing ARP responses which might otherwise contain the virtual IP so that they contain the real IP of the server instead. The only node that should respond to ARP requests for any of the VIPs is the current active LVS node.
- Once this has been completed on each real server, save the ARP table entries by typing the following commands on each real server:

  arptables-save > /etc/sysconfig/arptables
  systemctl enable arptables.service

  The systemctl enable command will cause the system to reload the arptables configuration on bootup before the network is started.
- Configure the virtual IP address on all real servers using ip addr to create an IP alias. For example:

  # ip addr add 192.168.76.24 dev eth0
- Configure Keepalived for Direct Routing. This can be done by adding lb_kind DR to the keepalived.conf file. Refer to Chapter 4, Initial Load Balancer Configuration with Keepalived for more information.
3.2.2. Direct Routing Using firewalld
Direct routing can also be implemented by means of firewalld. To configure direct routing using firewalld, you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system.
The firewalld method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address or addresses exist only on the active LVS director.
However, there is a performance penalty when using the firewalld method compared to arptables, as there is overhead in forwarding every return packet.
You also cannot reuse ports using the firewalld method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses.
To configure direct routing using the firewalld method, perform the following steps on every real server:
- Ensure that firewalld is running:

  # systemctl start firewalld

  Ensure that firewalld is enabled to start at system start:

  # systemctl enable firewalld
- Enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server. This command will cause the real servers to process packets destined for the VIP and port that they are given.

  # firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d vip -p tcp|udp -m tcp|udp --dport port -j REDIRECT
- Reload the firewall rules and keep the state information:

  # firewall-cmd --reload

  The current permanent configuration will become the new firewalld runtime configuration as well as the configuration at the next system start.
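For example, to service HTTP traffic on TCP port 80 for a hypothetical VIP of 192.168.76.24, the commands could look like the following sketch (the address is illustrative):

```shell
firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 \
    -d 192.168.76.24 -p tcp -m tcp --dport 80 -j REDIRECT
firewall-cmd --reload
```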
3.2.3. Direct Routing Using iptables
Direct routing can also be implemented by means of iptables firewall rules. To configure direct routing using iptables, you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system.
The iptables method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP addresses exist only on the active LVS director.
However, there is a performance penalty when using the iptables method compared to arptables, as there is overhead in forwarding and masquerading every packet.
You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses.
To configure direct routing using the iptables method, perform the following steps:
- On each real server, enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server:

  # iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT

  This command will cause the real servers to process packets destined for the VIP and port that they are given.
- Save the configuration on each real server:

  # iptables-save > /etc/sysconfig/iptables
  # systemctl enable iptables.service

  The systemctl enable command will cause the system to reload the iptables configuration on bootup before the network is started.
3.2.4. Direct Routing Using sysctl
Direct routing can also be implemented on the real servers by means of the sysctl interface. Administrators can configure two sysctl settings such that the real server will not announce the VIP in ARP requests and will not reply to ARP requests for the VIP address. To enable this, enter the following commands:
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
Alternatively, to make these settings persistent across reboots, add the following lines to the /etc/sysctl.d/arp.conf file:
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
3.3. Putting the Configuration Together
Important
If eth0 connects to the public network and eth1 connects to the private network on the active LVS router, then these same devices on the backup LVS router must connect to the same networks.
3.3.1. General Load Balancer Networking Tips
- Bringing Up Real Network Interfaces
- To open a real network interface, use the following command as root, replacing N with the number corresponding to the interface (eth0 and eth1):

  ifup ethN

  Warning
  Do not use the ifup scripts to open any floating IP addresses you may configure using Keepalived (eth0:1 or eth1:1). Use the service or systemctl command to start keepalived instead.
- Bringing Down Real Network Interfaces
- To bring down a real network interface, use the following command as root, replacing N with the number corresponding to the interface (eth0 and eth1):

  ifdown ethN
- Checking the Status of Network Interfaces
- If you need to check which network interfaces are up at any given time, enter the following command:

  ip link

  To view the routing table for a machine, issue the following command:

  ip route
3.3.2. Firewall Requirements
If you are running a firewall (by means of firewalld or iptables), you must allow VRRP traffic to pass between the keepalived nodes. To configure the firewall to allow the VRRP traffic with firewalld, run the following commands:
# firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
# firewall-cmd --reload
To configure the firewall to allow the VRRP traffic with iptables, run the following commands:
# iptables -I INPUT -p vrrp -j ACCEPT
# iptables-save > /etc/sysconfig/iptables
# systemctl restart iptables
3.4. Multi-port Services and Load Balancer
Firewall marks are assigned with iptables. The default firewall administration tool in Red Hat Enterprise Linux 7 is firewalld, which can be used to configure iptables. If preferred, iptables can be used directly. See the Red Hat Enterprise Linux 7 Security Guide for information on working with iptables in Red Hat Enterprise Linux 7.
3.4.1. Assigning Firewall Marks Using firewalld
Firewall marks can be assigned with firewalld's firewall-cmd utility.
Ensure that firewalld is running:
# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Tue 2016-01-26 05:23:53 EST; 7h ago
If firewalld is not running, start it by entering:

# systemctl start firewalld
Ensure that firewalld is enabled to start at system start:

# systemctl enable firewalld
Check the rich rules currently in place by logging in as root and entering the following command:

# firewall-cmd --list-rich-rules
If firewalld is active and rich rules are present, the command displays a set of rules.
If rich rules are present, make a note of the configuration in /etc/firewalld/zones/ and copy any rules worth keeping to a safe place before proceeding. Delete unwanted rich rules using a command in the following format:

# firewall-cmd --zone=zone --remove-rich-rule='rule' --permanent
The --permanent option makes the setting persistent, but the command will only take effect at the next system start. If required to make the setting take effect immediately, repeat the command omitting the --permanent option.
To allow VRRP traffic to pass between the keepalived nodes, add the following rule:

# firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
In this example, a firewall mark of 80 is assigned to incoming traffic destined for the floating IP address, n.n.n.n, on ports 80 and 443.
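One way to assign such a mark is with firewalld's direct interface, whose arguments are passed through to iptables; the following sketch mirrors the iptables rule shown in Section 3.4.2, with n.n.n.n left as the placeholder for your floating IP:

```shell
firewall-cmd --permanent --direct --add-rule ipv4 mangle PREROUTING 0 \
    -d n.n.n.n/32 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 80
firewall-cmd --reload
```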
See the Red Hat Enterprise Linux 7 Security Guide for more details on firewalld's rich language commands.
3.4.2. Assigning Firewall Marks Using iptables
Firewall marks can also be assigned with iptables.
Check the iptables rules currently in place by logging in as root and entering the following command:
/usr/sbin/service iptables status
If iptables is not running, the prompt will instantly reappear.
If iptables is active, it displays a set of rules. If rules are present, enter the following command:
/sbin/service iptables stop
Make a note of the rules in /etc/sysconfig/iptables and copy any rules worth keeping to a safe place before proceeding.
/usr/sbin/iptables -I INPUT -p vrrp -j ACCEPT
The following rule assigns a firewall mark, 80, to incoming traffic destined for the floating IP address, n.n.n.n, on ports 80 and 443.
/usr/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 -m multiport --dports 80,443 -j MARK --set-mark 80
You must be logged in as root and have loaded the module for iptables before issuing rules for the first time.
In the above iptables command, n.n.n.n should be replaced with the floating IP for your HTTP and HTTPS virtual servers. The command has the net effect of assigning any traffic addressed to the VIP on the appropriate ports a firewall mark of 80, which in turn is recognized by IPVS and forwarded appropriately.
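On the IPVS side, the mark is matched when the virtual server in keepalived.conf is defined by firewall mark instead of by address and port. The following is a hedged sketch of such a fragment; the real-server address and timing values are illustrative:

```
virtual_server fwmark 80 {    # match traffic carrying firewall mark 80
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.20 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```

Because the virtual server is keyed on the mark, HTTP and HTTPS traffic for the same VIP is treated as a single service and directed to the same real servers.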
3.5. Configuring FTP
3.5.1. How FTP Works
- Active Connections
- When an active connection is established, the server opens a data connection to the client from port 20 to a high range port on the client machine. All data from the server is then passed over this connection.
- Passive Connections
- When a passive connection is established, the client asks the FTP server to establish a passive connection port, which can be on any port higher than 10,000. The server then binds to this high-numbered port for this particular session and relays that port number back to the client. The client then opens the newly bound port for the data connection. Each data request the client makes results in a separate data connection. Most modern FTP clients attempt to establish a passive connection when requesting data from servers.
3.5.2. How This Affects Load Balancer Routing
Note
The LVS routers must have the ip_vs_ftp kernel module loaded. Run the following commands as an administrative user at a shell prompt to load this module and ensure that the module loads on a reboot:
echo "ip_vs_ftp" >> /etc/modules-load.d/ip_vs_ftp.conf
systemctl enable systemd-modules-load
systemctl start systemd-modules-load
3.5.3. Creating Network Packet Filter Rules
Before creating iptables rules for the FTP service, review the information in Section 3.4, “Multi-port Services and Load Balancer” concerning multi-port services and techniques for checking the existing network packet filtering rules.
The rules below assign the same firewall mark, 21, to FTP traffic.
3.5.3.1. Rules for Active Connections
The rules for active connections tell the kernel to accept and forward connections coming to the internal floating IP address on port 20 (the FTP data port).
The following iptables command allows the LVS router to accept outgoing connections from the real servers that IPVS does not know about:
/usr/sbin/iptables -t nat -A POSTROUTING -p tcp -s n.n.n.0/24 --sport 20 -j MASQUERADE
In the above iptables command, n.n.n should be replaced with the first three octets of the floating IP for the NAT interface's internal network, defined in the virtual_server section of the keepalived.conf file.
3.5.3.2. Rules for Passive Connections
Warning
Configure the FTP server, such as vsftpd, to use a matching port range. This can be accomplished by adding the following lines to /etc/vsftpd.conf:
pasv_min_port=10000
pasv_max_port=20000
Using pasv_address to override the real FTP server address should be avoided, since it is updated to the virtual IP address by LVS.
If the FTP server does not restrict its passive port range, change 10000:20000 in the commands below to 1024:65535.
The following iptables commands have the net effect of assigning any traffic addressed to the floating IP on the appropriate ports a firewall mark of 21, which is in turn recognized by IPVS and forwarded appropriately:
/usr/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 --dport 21 -j MARK --set-mark 21
/usr/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 --dport 10000:20000 -j MARK --set-mark 21
In the above iptables commands, n.n.n.n should be replaced with the floating IP for the FTP virtual server defined in the virtual_server subsection of the keepalived.conf file.
# iptables-save > /etc/sysconfig/iptables
To ensure the iptables service is started at system start, enter the following command:
# systemctl enable iptables
To load the saved rules, restart the iptables service:

# systemctl restart iptables
3.6. Saving Network Packet Filter Settings
To save the network packet filter settings configured with iptables, enter the following command:
# iptables-save > /etc/sysconfig/iptables
To ensure the iptables service is started at system start, enter the following command:
# systemctl enable iptables
To load the saved settings, restart the iptables service:

# systemctl restart iptables
3.7. Turning on Packet Forwarding and Nonlocal Binding
To turn on packet forwarding, log in as root and change the line which reads net.ipv4.ip_forward = 0 in /etc/sysctl.conf to the following:
net.ipv4.ip_forward = 1
To enable nonlocal binding, change the line in /etc/sysctl.conf that reads net.ipv4.ip_nonlocal_bind to the following:
net.ipv4.ip_nonlocal_bind = 1
To check whether packet forwarding is enabled, run the following command as root:
/usr/sbin/sysctl net.ipv4.ip_forward
To check whether nonlocal binding is enabled, run the following command as root:
/usr/sbin/sysctl net.ipv4.ip_nonlocal_bind
If the commands return the value 1, then the respective settings are enabled.
3.8. Configuring Services on the Real Servers
The appropriate service daemons must be installed and configured on the real servers, such as httpd for Web services or xinetd for FTP or Telnet services.
The sshd daemon should also be installed and running on the real servers.
Chapter 4. Initial Load Balancer Configuration with Keepalived
4.1. A Basic Keepalived configuration
In this example, the real servers run httpd with real IP addresses numbered 192.168.1.20 to 192.168.1.24, sharing a virtual IP address of 10.0.0.1. Each load balancer has two interfaces (eth0 and eth1), one for handling external Internet traffic, and the other for routing requests to the real servers. The load balancing algorithm used is Round Robin and the routing method is Network Address Translation.
4.1.1. Creating the keepalived.conf file
Keepalived is configured by means of the keepalived.conf file in each system configured as a load balancer. To create a load balancer topology like the example shown in Section 4.1, “A Basic Keepalived configuration”, use a text editor to open keepalived.conf in both the active and backup load balancers, LB1 and LB2. For example:
vi /etc/keepalived/keepalived.conf
Create the keepalived.conf file as explained in the following code sections. In this example, the keepalived.conf file is the same on both the active and backup routers with the exception of the VRRP instance, as noted in Section 4.1.1.2, “VRRP Instance”.
4.1.1.1. Global Definitions

The Global Definitions section of the keepalived.conf file allows administrators to specify notification details when changes to the load balancer occur. Note that the Global Definitions are optional and are not required for Keepalived configuration. This section of the keepalived.conf file is the same on both LB1 and LB2.

The notification_email is the administrator of the load balancer, while notification_email_from is the address from which load balancer state-change notifications are sent. The SMTP-specific configuration specifies the mail server from which the notifications are mailed.
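Putting these options together, a Global Definitions block might look like the following sketch; the email addresses and SMTP server values are illustrative assumptions, not values from the original example:

```
global_defs {
   notification_email {
     admin@example.com
   }
   notification_email_from noreply@example.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
}
```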
4.1.1.2. VRRP Instance

Next, create the vrrp_sync_group stanza of the keepalived.conf file on the master router and the backup router. Note that the state and priority values differ between the two systems.

The following is the vrrp_sync_group stanza for the keepalived.conf file in LB1, the master router.

The following is the vrrp_sync_group stanza of the keepalived.conf file for LB2, the backup router.
The vrrp_sync_group stanza defines the VRRP group that stays together through any state changes (such as failover). There is an instance defined for the external interface that communicates with the Internet (RH_EXT), as well as one for the internal interface (RH_INT).

The vrrp_instance line details the virtual interface configuration for the VRRP service daemon, which creates virtual IP instances. The state MASTER designates the active server; the state BACKUP designates the backup server.

The interface parameter assigns the physical interface name to this particular virtual IP instance.

The virtual_router_id is a numerical identifier for the Virtual Router instance. It must be the same on all LVS Router systems participating in this Virtual Router. It is used to differentiate multiple instances of keepalived running on the same network interface.

The priority specifies the order in which the assigned interface takes over in a failover; the higher the number, the higher the priority. The priority value must be within the range of 0 to 255, and the load balancing server configured as state MASTER should have a priority value higher than that of the server configured as state BACKUP.

The authentication block specifies the authentication type (auth_type) and password (auth_pass) used to authenticate servers for failover synchronization. PASS specifies password authentication; Keepalived also supports AH, or Authentication Headers, for connection integrity.

The virtual_ipaddress option specifies the interface virtual IP address.
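For LB1, the master router, the stanza described above might look like the following sketch. The interface names and addresses follow the example topology; the router IDs, priorities, and password are illustrative assumptions. On LB2, state would be BACKUP and the priority lower.

```
vrrp_sync_group VG1 {
   group {
      RH_EXT
      RH_INT
   }
}

vrrp_instance RH_EXT {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passw123
    }
    virtual_ipaddress {
        10.0.0.1
    }
}

vrrp_instance RH_INT {
    state MASTER
    interface eth1
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passw123
    }
    virtual_ipaddress {
        192.168.1.1
    }
}
```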
4.1.1.3. Virtual Server Definitions

The Virtual Server definitions section of the keepalived.conf file is the same on both LB1 and LB2.

The virtual_server is configured first with the IP address. Then a delay_loop configures the amount of time (in seconds) between health checks. The lb_algo option specifies the kind of algorithm used for availability (in this case, rr for Round-Robin; for a list of possible lb_algo values, see Table 4.1, “lb_algo Values for Virtual Server”). The lb_kind option determines the routing method; in this case, Network Address Translation (nat) is used.

Next, the real_server options are configured, again by specifying the IP address first. The TCP_CHECK stanza checks the availability of the real server using TCP. The connect_timeout configures the time in seconds before a timeout occurs.
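A sketch of the virtual server definition for the example topology; the delay_loop, weight defaults, and timeout values are illustrative assumptions, and the remaining real servers (192.168.1.22 to 192.168.1.24) follow the same pattern:

```
virtual_server 10.0.0.1 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.20 80 {
        TCP_CHECK {
            connect_timeout 10
        }
    }

    real_server 192.168.1.21 80 {
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
```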
Note
Table 4.1. lb_algo Values for Virtual Server

| Algorithm Name | lb_algo value |
|---|---|
| Round-Robin | rr |
| Weighted Round-Robin | wrr |
| Least-Connection | lc |
| Weighted Least-Connection | wlc |
| Locality-Based Least-Connection | lblc |
| Locality-Based Least-Connection Scheduling with Replication | lblcr |
| Destination Hash | dh |
| Source Hash | sh |
| Shortest Expected Delay | sed |
| Never Queue | nq |
4.2. Keepalived Direct Routing Configuration

Direct Routing configuration of Keepalived is similar to the NAT configuration, except for changing the lb_kind parameter to DR. Other configuration options are discussed in Section 4.1, “A Basic Keepalived configuration”.

The following is a sample keepalived.conf file for the active server in a Keepalived configuration that uses direct routing.

The following is a sample keepalived.conf file for the backup server in a Keepalived configuration that uses direct routing. Note that the state and priority values differ from those in the keepalived.conf file for the active server.
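The virtual server stanza in these files differs from the NAT example mainly in the lb_kind setting; a minimal sketch, with addresses taken from the earlier example topology and the remaining values as illustrative assumptions:

```
virtual_server 10.0.0.1 80 {
    delay_loop 10
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.1.20 80 {
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
```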
4.3. Starting the service

To start the Keepalived service, run the following command as root:

# systemctl start keepalived.service

To make the Keepalived service persist through reboots, run the following command as root:

# systemctl enable keepalived.service
Chapter 5. HAProxy Configuration

HAProxy configuration takes place in the /etc/haproxy/haproxy.cfg file, which consists of the following sections:

- The Section 5.2, “Global Settings”
- The proxies section, which consists of these subsections:
  - Section 5.3, “Default Settings”
  - Section 5.4, “Frontend Settings”
  - Section 5.5, “Backend Settings”
5.1. HAProxy Scheduling Algorithms

The scheduling algorithm for load balancing is set with the balance parameter in the backend section of the /etc/haproxy/haproxy.cfg configuration file. Note that HAProxy supports configuration with multiple back ends, and each back end can be configured with its own scheduling algorithm.
- Round-Robin (roundrobin) - Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular because it is network-connection based, not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. However, in HAProxy, since configuration of server weights can be done on the fly using this scheduler, the number of active servers is limited to 4095 per back end.
- Static Round-Robin (static-rr) - Distributes each request sequentially around a pool of real servers as does Round-Robin, but does not allow server weights to be changed dynamically. However, because of the static nature of server weights, there is no limit on the number of active servers in the back end.
- Least-Connection (leastconn) - Distributes more requests to real servers with fewer active connections. Administrators with a dynamic environment with varying session or connection lengths may find this scheduler a better fit for their environments. It is also ideal for an environment where a group of servers have different capacities, as administrators can adjust weights on the fly using this scheduler.
- Source (source) - Distributes requests to servers by hashing the requesting source IP address and dividing by the weight of all the running servers to determine which server gets the request. In a scenario where all servers are running, a given source IP address is consistently served by the same real server. If there is a change in the number or weight of the running servers, a session may be moved to another server because the hash/weight result has changed.
- URI (uri) - Distributes requests to servers by hashing the entire URI (or a configurable portion of a URI) and dividing by the weight of all the running servers to determine which server receives the request. In a scenario where all active servers are running, a given URI is consistently served by the same real server. This scheduler can be further configured by the number of characters at the start of the directory part of a URI used to compute the hash, and by the depth of directories in a URI (designated by forward slashes) used to compute the hash.
- URL Parameter (url_param) - Distributes requests to servers by looking up a particular parameter string in the source URL request and performing a hash calculation divided by the weight of all running servers. If the parameter is missing from the URL, the scheduler defaults to Round-Robin scheduling. The scheduler can also be configured to look for the parameter in the body of POST requests, with a wait limit on the maximum number of octets to analyze before computing the hash result.
- Header Name (hdr) - Distributes requests to servers by checking a particular header name in each source HTTP request and performing a hash calculation divided by the weight of all running servers. If the header is absent, the scheduler defaults to Round-Robin scheduling.
- RDP Cookie (rdp-cookie) - Distributes requests to servers by looking up the RDP cookie for every TCP request and performing a hash calculation divided by the weight of all running servers. If the cookie is absent, the scheduler defaults to Round-Robin scheduling. This method is ideal for persistence as it maintains session integrity.
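As a sketch, a few of these algorithms as they would appear on a balance line in a backend section. The backend names are illustrative assumptions; check_post and the uri modifiers (depth) are HAProxy keywords for the configuration options described above:

```
backend static_content
    balance uri depth 3

backend api
    balance url_param userid check_post 64

backend rdp_farm
    mode tcp
    balance rdp-cookie
```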
5.2. Global Settings

The global settings configure parameters that apply to all servers running HAProxy. A typical global section may look like the following:
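A representative sketch; the log target and the specific values shown are illustrative assumptions:

```
global
    log 127.0.0.1 local2
    maxconn 4000
    user haproxy
    group haproxy
    daemon
```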
The log parameter directs HAProxy to log all entries to the local syslog server. By default, this could be /var/log/syslog or some user-designated location.

The maxconn parameter specifies the maximum number of concurrent connections for the service. By default, the maximum is 2000.

The user and group parameters specify the user name and group name under which the haproxy process runs.

The daemon parameter specifies that haproxy runs as a background process.
5.3. Default Settings

The default settings configure parameters that apply to all proxy subsections in a configuration (frontend, backend, and listen). A typical default section may look like the following:
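A sketch using the parameter values described in this section; treat it as an illustrative assumption rather than the original sample:

```
defaults
    mode http
    log global
    option httplog
    option dontlognull
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
```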
Note
A parameter configured in a proxy subsection (frontend, backend, or listen) takes precedence over the same parameter's value in defaults.
The mode specifies the protocol for the HAProxy instance. Using the http mode connects source requests to real servers based on HTTP, which is ideal for load balancing web servers. For other applications, use the tcp mode.

The log parameter specifies the log address and syslog facilities to which log entries are written. The global value refers the HAProxy instance to whatever is specified in the log parameter in the global section.

The option httplog entry enables logging of various values of an HTTP session, including HTTP requests, session status, connection numbers, source address, and connection timers, among other values.

The option dontlognull entry disables logging of null connections, meaning that HAProxy will not log connections in which no data has been transferred. This is not recommended for environments such as web applications over the Internet, where null connections could indicate malicious activity such as port scanning for vulnerabilities.

The retries parameter specifies the number of times HAProxy will retry a connection to a real server after a failed first attempt.

The timeout values specify the maximum length of inactivity for a given request, connection, or response. These values are generally expressed in milliseconds (unless explicitly stated otherwise) but may be expressed in any other unit by suffixing the unit to the numeric value. Supported units are us (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). http-request 10s gives 10 seconds to wait for a complete HTTP request from a client. queue 1m sets one minute as the amount of time to wait before a connection is dropped and a client receives a 503 or "Service Unavailable" error. connect 10s specifies the number of seconds to wait for a successful connection to a server. client 1m specifies the amount of time (in minutes) a client can remain inactive (it neither accepts nor sends data). server 1m specifies the amount of time (in minutes) a server is given to accept or send data before a timeout occurs.
5.4. Frontend Settings

The frontend settings configure the servers' listening sockets for client connection requests. A typical HAProxy frontend configuration may look like the following:

frontend main
    bind 192.168.0.10:80
    default_backend app

The frontend called main is configured with the 192.168.0.10 IP address and listens on port 80 using the bind parameter. Once connected, the default_backend setting specifies that all sessions connect to the app back end.
5.5. Backend Settings

The backend settings specify the real server IP addresses as well as the load balancer scheduling algorithm. The following example shows a typical backend section:
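A sketch matching the parameter descriptions in this section; the IP addresses are illustrative assumptions:

```
backend app
    balance roundrobin
    server app1 192.168.1.1:80 check
    server app2 192.168.1.2:80 check
    server app3 192.168.1.3:80 check inter 2s rise 4 fall 3
    server app4 192.168.1.4:80 check
```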
The back end in this example is named app. The balance parameter specifies the load balancer scheduling algorithm to be used, which in this case is Round-Robin (roundrobin), but can be any scheduler supported by HAProxy. For more information on configuring schedulers in HAProxy, see Section 5.1, “HAProxy Scheduling Algorithms”.

The server lines specify the servers available in the back end. app1 to app4 are the names assigned internally to each real server; log files will refer to servers by these names. The address is the assigned IP address. The value after the colon is the port number to which connections are made on that particular server. The check option flags a server for periodic health checks to ensure that it is available, able to receive and send data, and able to take session requests. Server app3 also configures the health check interval to two seconds (inter 2s), the number of checks app3 must pass before it is considered healthy (rise 4), and the number of consecutive failed checks before it is considered failed (fall 3).
5.6. Starting haproxy

To start the HAProxy service, run the following command as root:

# systemctl start haproxy.service

To make the HAProxy service persist through reboots, run the following command as root:

# systemctl enable haproxy.service
5.7. Logging HAProxy Messages to rsyslog

You can configure HAProxy to log messages to rsyslog by writing to the /dev/log socket. Alternatively, you can target the TCP loopback address; however, this results in slower performance.

The following procedure configures HAProxy to log messages to rsyslog.
- In the global section of the HAProxy configuration file, use the log directive to target the /dev/log socket:

  log /dev/log local0

- Update the frontend, backend, and listen proxies to send messages to the rsyslog service you configured in the global section of the HAProxy configuration file. To do this, add a log global directive to the defaults section of the configuration file, as shown:

  defaults
      log global
      option httplog

- If you are running HAProxy within a chrooted environment, or you let HAProxy create a chroot directory for you by using the chroot configuration directive, then the socket must be made available within that chroot directory. You can do this by modifying the rsyslog configuration to create a new listening socket within the chroot filesystem. To do this, add the following lines to your rsyslog configuration file:

  $ModLoad imuxsock
  $AddUnixListenSocket PATH_TO_CHROOT/dev/log

- To customize what and where HAProxy log messages will appear, you can use rsyslog filters as described in Basic Configuration of Rsyslog in the System Administrator's Guide.
Appendix A. Example Configuration: Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived
A.1. Prerequisites
- A running Ceph cluster;
- At least two Ceph Object Gateway servers within the same zone configured to run on port 80;
- At least two servers for HAProxy and keepalived.
Note
A.2. Preparing HAProxy Nodes

This example assumes two HAProxy nodes named haproxy and haproxy2, and two Ceph Object Gateway servers named rgw1 and rgw2. You may use any naming convention you prefer. Perform the following procedure on your two HAProxy nodes:
- Install Red Hat Enterprise Linux 7.
- Register the nodes.

  # subscription-manager register

- Enable the Red Hat Enterprise Linux 7 server repository.

  # subscription-manager repos --enable=rhel-7-server-rpms

- Update the server.

  # yum update -y

- Install admin tools (for example, wget, vim, and so on) as needed.
- Open port 80.

  # firewall-cmd --zone=public --add-port 80/tcp --permanent
  # firewall-cmd --reload

- For HTTPS, open port 443.

  # firewall-cmd --zone=public --add-port 443/tcp --permanent
  # firewall-cmd --reload
A.3. Install and Configure keepalived

- Install keepalived.

  # yum install -y keepalived

- Configure keepalived.

  # vim /etc/keepalived/keepalived.conf

  In the following configuration, there is a script to check the HAProxy processes. The instance uses eth0 as the network interface and configures haproxy as the master server and haproxy2 as the backup server. It also assigns a virtual IP address of 192.168.0.100.

- Enable and start keepalived.

  # systemctl enable keepalived
  # systemctl start keepalived
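The configuration described in the Configure keepalived step might look like the following sketch. The check script, router ID, priority, weight, and password are illustrative assumptions; on haproxy2 the state would be BACKUP with a lower priority:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify that the haproxy process is running
    interval 2
    weight 2
}

vrrp_instance RGW {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passw123
    }
    virtual_ipaddress {
        192.168.0.100
    }
    track_script {
        chk_haproxy
    }
}
```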
A.4. Install and Configure HAProxy

- Install haproxy.

  # yum install haproxy

- Configure haproxy for SELinux and HTTP.

  # vim /etc/firewalld/services/haproxy-http.xml

  Add a firewalld service definition for the HTTP port. Then, as root, assign the correct SELinux context and file permissions to the haproxy-http.xml file.

  # cd /etc/firewalld/services
  # restorecon haproxy-http.xml
  # chmod 640 haproxy-http.xml

- If you intend to use HTTPS, configure haproxy for SELinux and HTTPS.

  # vim /etc/firewalld/services/haproxy-https.xml

  Add a firewalld service definition for the HTTPS port. Then, as root, assign the correct SELinux context and file permissions to the haproxy-https.xml file.

  # cd /etc/firewalld/services
  # restorecon haproxy-https.xml
  # chmod 640 haproxy-https.xml

- If you intend to use HTTPS, generate keys for SSL. If you do not have a certificate, you may use a self-signed certificate. For information on generating keys and on self-signed certificates, see the Red Hat Enterprise Linux System Administrator's Guide. Finally, put the certificate and key into a PEM file.

  # cat example.com.crt example.com.key > example.com.pem
  # cp example.com.pem /etc/ssl/private/

- Configure HAProxy.

  # vim /etc/haproxy/haproxy.cfg

  The global and defaults sections of haproxy.cfg may remain unchanged. After the defaults section, configure frontend and backend sections for the gateway servers.

- Enable and start haproxy.

  # systemctl enable haproxy
  # systemctl start haproxy
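The frontend and backend sections referenced in the Configure HAProxy step might look like the following sketch. The rgw1 and rgw2 names come from the prerequisites; the bind address, server IP addresses, and section names are illustrative assumptions:

```
frontend http_web
    bind *:80
    mode http
    default_backend rgw

backend rgw
    mode http
    balance roundrobin
    server rgw1 192.168.0.71:80 check
    server rgw2 192.168.0.72:80 check
```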
A.5. Test Your HAProxy Configuration

On your HAProxy nodes, check that the virtual IP address from your keepalived configuration appears.

$ ip addr show

Check whether you can reach the gateway nodes through the load balancer:

$ wget haproxy

Compare the result with the response from one of the gateway nodes directly:

$ wget rgw1

If both commands return an index.html file with the same contents, then your configuration is working properly.
Appendix B. Revision History

| Revision | Date |
|---|---|
| 4.1-1 | Wed Aug 7 2019 |
| 3.1-2 | Thu Oct 4 2018 |
| 2.1-1 | Thu Mar 15 2018 |
| 2.1-0 | Thu Dec 14 2017 |
| 0.6-5 | Wed Nov 22 2017 |
| 0.6-3 | Thu Jul 27 2017 |
| 0.6-1 | Wed May 10 2017 |
| 0.5-9 | Mon Dec 5 2016 |
| 0.5-7 | Mon Oct 17 2016 |
| 0.5-6 | Thu Aug 18 2016 |
| 0.3-2 | Mon Nov 9 2015 |
| 0.3-0 | Wed Aug 19 2015 |
| 0.2-6 | Mon Feb 16 2015 |
| 0.2-5 | Thu Dec 11 2014 |
| 0.2-4 | Thu Dec 04 2014 |
| 0.1-12 | Tue Jun 03 2014 |
| 0.1-6 | Mon Jun 13 2013 |
| 0.1-1 | Wed Jan 16 2013 |
Index
A
- arptables, Direct Routing Using arptables
D
- direct routing
- and arptables, Direct Routing Using arptables
- and firewalld, Direct Routing Using firewalld
F
- firewalld, Direct Routing Using firewalld
- FTP, Configuring FTP
- (see also Load Balancer )
H
- HAProxy, haproxy
- HAProxy and Keepalived, keepalived and haproxy
J
- job scheduling, Keepalived , keepalived Scheduling Overview
K
- Keepalived
- configuration, A Basic Keepalived configuration
- configuration file, Creating the keepalived.conf file
- initial configuration, Initial Load Balancer Configuration with Keepalived
- job scheduling, keepalived Scheduling Overview
- scheduling, job, keepalived Scheduling Overview
- Keepalived configuration
- Direct Routing, Keepalived Direct Routing Configuration
- keepalived daemon, keepalived
- keepalived.conf, Creating the keepalived.conf file
- Keepalived
- LVS routers
- primary node, Initial Load Balancer Configuration with Keepalived
L
- least connections (see job scheduling, Keepalived )
- Load Balancer
- direct routing
- and arptables, Direct Routing Using arptables
- and firewalld, Direct Routing Using firewalld
- requirements, hardware, Direct Routing, Load Balancer Using Direct Routing
- requirements, network, Direct Routing, Load Balancer Using Direct Routing
- requirements, software, Direct Routing, Load Balancer Using Direct Routing
- HAProxy, haproxy
- HAProxy and Keepalived, keepalived and haproxy
- Keepalived, A Basic Keepalived configuration, Keepalived Direct Routing Configuration
- keepalived daemon, keepalived
- multi-port services, Multi-port Services and Load Balancer
- FTP, Configuring FTP
- NAT routing
- requirements, hardware, The NAT Load Balancer Network
- requirements, network, The NAT Load Balancer Network
- requirements, software, The NAT Load Balancer Network
- packet forwarding, Turning on Packet Forwarding and Nonlocal Binding
- routing methods
- NAT, Routing Methods
- routing prerequisites, Configuring Network Interfaces for Load Balancer with NAT
- three-tier, A Three-Tier keepalived Load Balancer Configuration
- LVS
- NAT routing
- enabling, Enabling NAT Routing on the LVS Routers
- overview of, Load Balancer Overview
- real servers, Load Balancer Overview
M
- multi-port services, Multi-port Services and Load Balancer
- (see also Load Balancer )
N
- NAT
- enabling, Enabling NAT Routing on the LVS Routers
- routing methods, Load Balancer , Routing Methods
- network address translation (see NAT)
P
- packet forwarding, Turning on Packet Forwarding and Nonlocal Binding
- (see also Load Balancer)
R
- real servers
- configuring services, Configuring Services on the Real Servers
- round robin (see job scheduling, Keepalived )
- routing
- prerequisites for Load Balancer , Configuring Network Interfaces for Load Balancer with NAT
S
- scheduling, job (Keepalived ), keepalived Scheduling Overview
W
- weighted least connections (see job scheduling, Keepalived )
- weighted round robin (see job scheduling, Keepalived )