Chapter 15. Configure Load Balancing-as-a-Service (LBaaS)


Load Balancing-as-a-Service (LBaaS) enables OpenStack Networking to distribute incoming requests evenly between designated instances. This step-by-step guide configures OpenStack Networking to use LBaaS with the Open vSwitch (OVS) plug-in.

Introduced in Red Hat OpenStack Platform 5, LBaaS ensures that the workload is shared predictably among instances and allows more effective use of system resources. Incoming requests are distributed using one of the following load balancing methods, selected per pool (see the example after this list):

  • Round robin - Rotates requests evenly between multiple instances.
  • Source IP - Requests from a unique source IP address are consistently directed to the same instance.
  • Least connections - Allocates requests to the instance with the least number of active connections.
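
For example, with the LBaaSv2 neutron CLI the method is chosen per pool through the --lb-algorithm parameter, which accepts ROUND_ROBIN, LEAST_CONNECTIONS, or SOURCE_IP. This sketch assumes a listener named listener1 already exists; the pool name is illustrative:

    $ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1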
Table 15.1. LBaaS features

Monitors

LBaaS provides availability monitoring with the ping, TCP, HTTP, and HTTPS GET methods. Monitors determine whether pool members are available to handle requests.

Management

LBaaS is managed using a variety of tool sets. The REST API is available for programmatic administration and scripting. Users perform administrative management of load balancers through either the CLI (neutron) or the OpenStack dashboard.

Connection limits

Ingress traffic can be shaped with connection limits. This feature allows workload control and can also assist with mitigating DoS (Denial of Service) attacks.

Session persistence

LBaaS supports session persistence by ensuring incoming requests are routed to the same instance within a pool of multiple instances. LBaaS supports routing decisions based on cookies and source IP address.
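
For example, monitors and connection limits are both configured through the LBaaSv2 neutron CLI. The pool and loadbalancer names below are illustrative and assume those objects already exist:

    $ neutron lbaas-healthmonitor-create --delay 5 --timeout 5 --max-retries 3 --type HTTP --pool web-pool
    $ neutron lbaas-listener-create --loadbalancer web-lb --protocol HTTP --protocol-port 80 --connection-limit 100 --name web-listener

The monitor sends an HTTP GET to each member of web-pool every 5 seconds and marks a member down after 3 failed checks; the --connection-limit option caps the listener at 100 concurrent connections.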

Note

LBaaS is currently supported only with IPv4 addressing.

Note

LBaaSv1 has been removed in Red Hat OpenStack Platform 10 (Newton) and replaced by LBaaSv2.

15.1. OpenStack Networking and LBaaS Topology

OpenStack Networking (neutron) services can be broadly classified into two categories.

1. Neutron API server - This service runs the OpenStack Networking API server, whose main responsibility is to provide an API for end users and services to interact with OpenStack Networking. This server also interacts with the underlying database to store and retrieve tenant network, router, and load balancer details, among others.

2. Neutron agents - These are the services that deliver various networking functions for OpenStack Networking:

  • neutron-dhcp-agent - manages DHCP IP addressing for tenant private networks.
  • neutron-l3-agent - facilitates layer 3 routing between tenant private networks, the external network, and others.
  • neutron-lbaasv2-agent - provisions the LBaaS routers created by tenants.

The following diagram describes the flow of HTTPS traffic through to a pool member:

[Figure: LBaaS topology, showing HTTPS traffic flow to a pool member]
15.1.1. Support Status of LBaaS
  • LBaaS v1 API was removed in version 10.
  • LBaaS v2 API was introduced in Red Hat OpenStack Platform 7, and is the only available LBaaS API as of version 10.
  • LBaaS deployment is not currently supported in Red Hat OpenStack Platform director.
15.1.2. Service Placement

The OpenStack Networking services can either run together on the same physical server, or on separate dedicated servers.

Note

Red Hat OpenStack Platform 11 added support for composable roles, allowing you to separate network services into a custom role. However, for simplicity, this guide assumes that a deployment uses the default controller role.

The server that runs the API server is usually called the Controller node, whereas the server that runs the OpenStack Networking agents is called the Network node. An ideal production environment separates these components onto dedicated nodes for performance and scalability reasons, but a testing or PoC deployment might run them all on the same node. This chapter covers both scenarios: the steps in the Controller node section need to be performed on the API server, whereas the steps in the Network node section are performed on the server that runs the LBaaS agent.

Note

If both the Controller and Network roles are on the same physical node, then the steps must be performed on that server.

15.2. Configure LBaaS

This procedure configures OpenStack Networking (neutron) to use LBaaS with the Open vSwitch (OVS) plug-in. Perform these steps on nodes running the neutron-server service.

On the Controller node (API Server):

  1. Install HAProxy:

    # yum install haproxy
  2. Add the LBaaS tables to the neutron database:

    $ neutron-db-manage --subproject neutron-lbaas --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
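
    Optionally, verify that the migration applied by asking alembic for the current revision (same config files as above; the exact output format varies by release):

    $ neutron-db-manage --subproject neutron-lbaas --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current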
  3. Change the service provider in /etc/neutron/neutron_lbaas.conf. In the [service_providers] section, comment out (#) all entries except for:

    service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
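
    The resulting section might look like the following; the commented-out line is an illustrative LBaaS v1 entry:

    [service_providers]
    # service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    service_provider=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default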
  4. In /etc/neutron/neutron.conf, confirm that you have the LBaaS v2 plugin configured in service_plugins:

    service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

    You can also expect to see any other plugins you have previously added.

    Note

    If you have lbaasv1 configured, replace it with the above setting for lbaasv2.
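
    For example, a deployment that also uses the layer 3 router service plugin might end up with a line similar to this (the router entry is illustrative):

    service_plugins=router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2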

  5. In /etc/neutron/lbaas_agent.ini, add the following to the [DEFAULT] section:

    ovs_use_veth = False
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  6. In /etc/neutron/services_lbaas.conf, add the following to the [haproxy] section:

    user_group = haproxy
    1. Comment out any other device driver entries.

      Note

      If the l3-agent is in a failed state, check the l3-agent log files. You may need to edit /etc/neutron/neutron.conf and comment out certain values in [DEFAULT], and uncomment the corresponding values in the [oslo_messaging_rabbit] section, as described in the log file.

  7. Configure the LBaaS services, and review their status:

    1. Stop the lbaasv1 services and start lbaasv2:

      # systemctl disable neutron-lbaas-agent.service
      # systemctl stop neutron-lbaas-agent.service
      # systemctl mask neutron-lbaas-agent.service
      # systemctl enable neutron-lbaasv2-agent.service
      # systemctl start neutron-lbaasv2-agent.service
    2. Review the status of lbaasv2:

      # systemctl status neutron-lbaasv2-agent.service
    3. Restart neutron-server and check the status:

      # systemctl restart neutron-server.service
      # systemctl status neutron-server.service
    4. Check the Loadbalancerv2 agent:

      $ neutron agent-list
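
      In the output, confirm that the Loadbalancerv2 agent entries report alive (:-)). For example, filtered with grep (the ID and host here are illustrative):

      $ neutron agent-list | grep Loadbalancerv2
      | 88f4c436-7152-4d30-a9e8-a793750bcbba | Loadbalancerv2 agent | node2             |                   | :-)   | True           | neutron-lbaasv2-agent     |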

15.3. Automatically Reschedule Load Balancers

You can configure neutron to automatically reschedule load balancers from failed LBaaS agents. Previously, load balancers could be scheduled across multiple LBaaS agents; however, if a hypervisor died, the load balancers scheduled to that node would cease operation. Now these load balancers can be automatically rescheduled to a different agent. This feature is turned off by default, and is controlled by the allow_automatic_lbaas_agent_failover option.
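
How quickly a failed agent is detected is governed by the agent health check in neutron-server. As a rough guide, an agent is considered dead once it misses heartbeats for agent_down_time seconds (set in /etc/neutron/neutron.conf on the controller, default 75), and rescheduling is triggered after roughly twice that interval; with the default value this matches the "150 seconds" in the sample log later in this section:

    [DEFAULT]
    # Seconds without a heartbeat before neutron-server considers an agent dead.
    agent_down_time = 75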

15.3.1. Enable Automatic Failover
Note

You will need at least two nodes running the Loadbalancerv2 agent.

  1. On all nodes running the Loadbalancerv2 agent, edit /etc/neutron/neutron_lbaas.conf and set the following in the [DEFAULT] section:

    allow_automatic_lbaas_agent_failover=True
  2. Restart the LBaaS agents and neutron-server:

    # systemctl restart neutron-lbaasv2-agent.service
    # systemctl restart neutron-server.service
  3. Review the state of the agents:

    $ neutron agent-list
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+
    | id                                   | agent_type           | host              | availability_zone | alive | admin_state_up | binary                    |
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+
    | 2af49b85-7a55-4420-97e0-186c233cce08 | Open vSwitch agent   | node1             |                   | :-)   | True           | neutron-openvswitch-agent |
    | 2d81c836-2f85-47c2-9cdc-665aa796e977 | DHCP agent           | node1             | nova              | :-)   | True           | neutron-dhcp-agent        |
    | 58fa7369-ea35-4663-ae34-97518e847741 | Open vSwitch agent   | node2             |                   | :-)   | True           | neutron-openvswitch-agent |
    | 7b665b9d-4c7e-4da1-a37a-1007af6444fc | Loadbalancerv2 agent | node1             |                   | :-)   | True           | neutron-lbaasv2-agent     |
    | 88f4c436-7152-4d30-a9e8-a793750bcbba | Loadbalancerv2 agent | node2             |                   | :-)   | True           | neutron-lbaasv2-agent     |
    | de6640a1-17a7-4ceb-986a-3b0de3b8845e | Metadata agent       | node1             |                   | :-)   | True           | neutron-metadata-agent    |
    | e4f77843-48e9-43af-a1af-884c07714416 | L3 agent             | node1             | nova              | :-)   | True           | neutron-l3-agent          |
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+
15.3.2. Sample Failover Configuration
  1. Create a new loadbalancer:

    $ neutron lbaas-loadbalancer-create --name lb1 private-subnet
    Created a new loadbalancer:
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | description         |                                      |
    | id                  | b130e956-b8d1-4290-ab83-febc19797683 |
    | listeners           |                                      |
    | name                | lb1                                  |
    | operating_status    | OFFLINE                              |
    | pools               |                                      |
    | provider            | haproxy                              |
    | provisioning_status | PENDING_CREATE                       |
    | tenant_id           | 991b8c905d644900948b4540d9815fa9     |
    | vip_address         | 10.0.0.3                             |
    | vip_port_id         | 89f05da4-f820-470d-95c7-d13fe09a2b6f |
    | vip_subnet_id       | 6c8f7812-7fd2-4e79-bf96-0b85f47bea49 |
    +---------------------+--------------------------------------+
  2. Create a listener so that haproxy is spawned:

    $ neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
    Created a new listener:
    +---------------------------+------------------------------------------------+
    | Field                     | Value                                          |
    +---------------------------+------------------------------------------------+
    | admin_state_up            | True                                           |
    | connection_limit          | -1                                             |
    | default_pool_id           |                                                |
    | default_tls_container_ref |                                                |
    | description               |                                                |
    | id                        | 538c13bf-6b27-441d-8ac7-d54ef6b7a503           |
    | loadbalancers             | {"id": "b130e956-b8d1-4290-ab83-febc19797683"} |
    | name                      | listener1                                      |
    | protocol                  | HTTP                                           |
    | protocol_port             | 80                                             |
    | sni_container_refs        |                                                |
    | tenant_id                 | 991b8c905d644900948b4540d9815fa9               |
    +---------------------------+------------------------------------------------+
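
    At this point no pool is attached, so the listener cannot forward traffic yet. Optionally, attach a pool and a member; the names, address, and port below are illustrative:

    $ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
    $ neutron lbaas-member-create --subnet private-subnet --address 10.0.0.4 --protocol-port 80 pool1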
  3. Check which LBaaS agent the loadbalancer was scheduled to:

    $ neutron lbaas-agent-hosting-loadbalancer lb1
    +--------------------------------------+-------------------+----------------+-------+
    | id                                   | host              | admin_state_up | alive |
    +--------------------------------------+-------------------+----------------+-------+
    | 88f4c436-7152-4d30-a9e8-a793750bcbba | node2             | True           | :-)   |
    +--------------------------------------+-------------------+----------------+-------+
  4. Verify that haproxy runs on that node. In addition, verify that it is not currently running on the other node:

    stack@node2:~/$ ps -ef | grep "haproxy -f" | grep lbaas
    nobody   14503     1  0 17:14 ?        00:00:00 haproxy -f /opt/openstack/data/neutron/lbaas/v2/b130e956-b8d1-4290-ab83-febc19797683/haproxy.conf -p /opt/openstack/data/neutron/lbaas/v2/b130e956-b8d1-4290-ab83-febc19797683/haproxy.pid
    
    stack@node1:~/$ ps -ef | grep "haproxy -f" | grep lbaas
    stack@node1:~/$
  5. Kill the hosting LBaaS agent process (see the example below), then wait for neutron-server to recognize the failure and reschedule the loadbalancer. The dead agent initially still appears as the host, but with alive shown as xxx:
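
    For example, you can simulate the failure by abruptly killing the agent on the hosting node; SIGKILL prevents the agent from deregistering cleanly:

    stack@node2:~/$ pkill -9 -f neutron-lbaasv2-agent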

    $ neutron lbaas-agent-hosting-loadbalancer lb1
    +--------------------------------------+-------------------+----------------+-------+
    | id                                   | host              | admin_state_up | alive |
    +--------------------------------------+-------------------+----------------+-------+
    | 88f4c436-7152-4d30-a9e8-a793750bcbba | node2             | True           | xxx   |
    +--------------------------------------+-------------------+----------------+-------+
    • Review the neutron-server (q-svc) log:

      2016-12-08 17:17:48.427 WARNING neutron.db.agents_db [req-1518b1ee-1ce3-4813-9999-9e9323666df7 None None] Agent healthcheck: found 1 dead agents out of 7:
                      Type       Last heartbeat host
      Loadbalancerv2 agent  2016-12-08 17:15:11 node2
      2016-12-08 17:18:06.000 WARNING neutron.db.agentschedulers_db [req-d0c689d4-434b-4db7-8140-27d3d3442dec None None] Rescheduling loadbalancer b130e956-b8d1-4290-ab83-febc19797683 from agent 88f4c436-7152-4d30-a9e8-a793750bcbba because the agent did not report to the server in the last 150 seconds.
  6. Verify that the loadbalancer was actually rescheduled to the live lbaas agent, and that haproxy is running:

    $ neutron lbaas-agent-hosting-loadbalancer lb1
    +--------------------------------------+-------------------+----------------+-------+
    | id                                   | host              | admin_state_up | alive |
    +--------------------------------------+-------------------+----------------+-------+
    | 7b665b9d-4c7e-4da1-a37a-1007af6444fc | node1             | True           | :-)   |
    +--------------------------------------+-------------------+----------------+-------+
    • Review the state of the haproxy process:

      stack@node1:~/$ ps -ef | grep "haproxy -f" | grep lbaas
      nobody     768     1  0 17:18 ?        00:00:00 haproxy -f /opt/openstack/data/neutron/lbaas/v2/b130e956-b8d1-4290-ab83-febc19797683/haproxy.conf -p /opt/openstack/data/neutron/lbaas/v2/b130e956-b8d1-4290-ab83-febc19797683/haproxy.pid
  7. Revive the LBaaS agent that you killed in step 5, as shown in the example below. Verify that haproxy is no longer running on that node and that neutron-server recognizes the agent:
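
    For example, assuming the agent is managed as a systemd service on that node:

    stack@node2:~/$ sudo systemctl start neutron-lbaasv2-agent.service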

    stack@node2:~/$ ps -ef | grep "haproxy -f" | grep lbaas
    stack@node2:~/$
  8. Review the agent list:

    $ neutron agent-list
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+
    | id                                   | agent_type           | host              | availability_zone | alive | admin_state_up | binary                    |
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+
    | 2af49b85-7a55-4420-97e0-186c233cce08 | Open vSwitch agent   | node1             |                   | :-)   | True           | neutron-openvswitch-agent |
    | 2d81c836-2f85-47c2-9cdc-665aa796e977 | DHCP agent           | node1             | nova              | :-)   | True           | neutron-dhcp-agent        |
    | 58fa7369-ea35-4663-ae34-97518e847741 | Open vSwitch agent   | node2             |                   | :-)   | True           | neutron-openvswitch-agent |
    | 7b665b9d-4c7e-4da1-a37a-1007af6444fc | Loadbalancerv2 agent | node1             |                   | :-)   | True           | neutron-lbaasv2-agent     |
    | 88f4c436-7152-4d30-a9e8-a793750bcbba | Loadbalancerv2 agent | node2             |                   | :-)   | True           | neutron-lbaasv2-agent     |
    | de6640a1-17a7-4ceb-986a-3b0de3b8845e | Metadata agent       | node1             |                   | :-)   | True           | neutron-metadata-agent    |
    | e4f77843-48e9-43af-a1af-884c07714416 | L3 agent             | node1             | nova              | :-)   | True           | neutron-l3-agent          |
    +--------------------------------------+----------------------+-------------------+-------------------+-------+----------------+---------------------------+