Chapter 15. Load Balancing-as-a-Service (LBaaS) with Octavia


The OpenStack Load-balancing service (Octavia) provides a Load Balancing-as-a-Service (LBaaS) version 2 implementation for Red Hat OpenStack Platform director installations. This section contains information about enabling Octavia and assumes that Octavia services are hosted on the same nodes as the Networking API server. By default, the load-balancing services are on the Controller nodes.

Note

Red Hat does not support a migration path from Neutron-LBaaS to Octavia. However, there are some unsupported open source tools that are available. For more information, see https://github.com/nmagnezi/nlbaas2octavia-lb-replicator/tree/stable_1.

15.1. Overview of Octavia

Octavia uses a set of instances, called amphorae, on a Compute node and communicates with the amphorae over a load-balancing management network (lb-mgmt-net).

Octavia includes the following services:

API Controller (octavia_api container)
Communicates with the controller worker for configuration updates and to deploy, monitor, or remove amphora instances.
Controller Worker (octavia_worker container)
Sends configuration and configuration updates to amphorae over the LB network.
Health Manager
Monitors the health of individual amphorae and handles failover events if amphorae fail unexpectedly.
Important

Health monitors of type PING only check whether the member is reachable and responds to ICMP echo requests. PING does not detect whether the application running on that instance is healthy. Use PING only in the specific cases where an ICMP echo request is a valid health check.

Housekeeping Manager
Cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.
Loadbalancer
The top-level API object that represents the load-balancing entity. The VIP address is allocated when the load balancer is created. When you create the load balancer, an amphora instance launches on a Compute node.
Amphora
The instance that performs the load balancing. Amphorae are typically instances running on Compute nodes and are configured with load balancing parameters according to the listener, pool, health monitor, L7 policies, and members configuration. Amphorae send a periodic heartbeat to the Health Manager.
Listener
The listening endpoint, for example HTTP, of a load-balanced service. A listener might refer to several pools and switch between pools using layer 7 rules.
Pool
A group of members that handle client requests from the load balancer (amphora). A pool is associated with only one listener.
Member
Compute instances that serve traffic behind the load balancer (amphora) in a pool.

The following diagram shows the flow of HTTPS traffic through to a pool member:

[Figure: LBaaS topology]

15.2. Octavia software requirements

Octavia requires that you configure the following core OpenStack components:

  • Compute (nova)
  • Networking (enable allowed_address_pairs)
  • Image (glance)
  • Identity (keystone)
  • RabbitMQ
  • MySQL
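
To confirm that the allowed_address_pairs extension is enabled in the Networking service, you can query the extension list. This check is a suggestion rather than part of the formal requirements; allowed-address-pairs is the upstream extension alias:

$ openstack extension list --network | grep -i allowed-address-pairs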

15.3. Prerequisites for the undercloud

This section assumes that:

  • your undercloud is already installed and ready to deploy an overcloud with Octavia enabled.
  • only container deployments are supported.
  • Octavia runs on your Controller node.
Note

If you want to enable the Octavia service on an existing overcloud deployment, you must prepare the undercloud. Failure to do so results in the overcloud installation being reported as successful yet without Octavia running. To prepare the undercloud, see the Transitioning to Containerized Services guide.

15.3.1. Octavia feature support matrix

Table 15.1. Octavia feature support matrix
+---------------------------------------------+-----------------------------+
| Feature                                     | Support level in RHOSP 16.0 |
+---------------------------------------------+-----------------------------+
| ML2/OVS L3 HA                               | Full support                |
| ML2/OVS DVR                                 | Full support                |
| ML2/OVS L3 HA + composable network node [1] | Full support                |
| ML2/OVS DVR + composable network node [1]   | Full support                |
| ML2/OVN L3 HA                               | Full support                |
| ML2/OVN DVR                                 | Full support                |
| Amphora active-standby                      | Technology Preview only     |
| Terminated HTTPS load balancers             | Full support                |
| Amphora spare pool                          | Technology Preview only     |
| UDP                                         | Technology Preview only     |
| Backup members                              | Technology Preview only     |
| Provider framework                          | Technology Preview only     |
| TLS client authentication                   | Technology Preview only     |
| TLS backend encryption                      | Technology Preview only     |
| Octavia flavors                             | Full support                |
| Object tags                                 | Technology Preview only     |
| Listener API timeouts                       | Full support                |
| Log offloading                              | Technology Preview only     |
| VIP access control list                     | Full support                |
| Volume-based amphora                        | No support                  |
+---------------------------------------------+-----------------------------+

[1] Network node with OVS, metadata, DHCP, L3, and Octavia (worker, health monitor, housekeeping).

15.4. Planning your Octavia deployment

Red Hat OpenStack Platform provides a workflow task to simplify the post-deployment steps for the Load-balancing service. This Octavia workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment:

  • Configure certificates and keys.
  • Configure the load-balancing management network between the amphorae and the Octavia Controller worker and health manager.
Note

Do not modify the OpenStack heat templates directly. Create a custom environment file (for example, octavia-environment.yaml) to override default parameter values.

Amphora Image

On pre-provisioned servers, you must install the amphora image on the undercloud before deploying Octavia:

$ sudo dnf install octavia-amphora-image-x86_64.noarch

On servers that are not pre-provisioned, Red Hat OpenStack Platform director automatically downloads the default amphora image, uploads it to the overcloud Image service, and then configures Octavia to use this amphora image. During a stack update or upgrade, director updates this image to the latest amphora image.

Note

Custom amphora images are not supported.
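
After deployment, you can confirm that an amphora image is available in the overcloud Image service. This check is optional, and the exact image name varies by release:

$ source ~/overcloudrc
$ openstack image list | grep -i amphora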

15.4.1. Configuring Octavia certificates and keys

Octavia containers require secure communication with load balancers and with each other. You can specify your own certificates and keys or allow these to be automatically generated by Red Hat OpenStack Platform director. We recommend you allow director to automatically create the required private certificate authorities and issue the necessary certificates.

If you must use your own certificates and keys, complete the following steps:

  1. On the machine on which you run the openstack overcloud deploy command, create a custom YAML environment file.

    $ vi /home/stack/templates/octavia-environment.yaml
  2. In the YAML environment file, add the following parameters with values appropriate for your site:

    • OctaviaCaCert:

      The certificate for the CA that Octavia uses to generate certificates.

    • OctaviaCaKey:

      The private CA key used to sign the generated certificates.

    • OctaviaClientCert:

      The client certificate and unencrypted key issued by the Octavia CA for the controllers.

    • OctaviaCaKeyPassphrase:

      The passphrase used with the private CA key above.

    • OctaviaGenerateCerts:

      The Boolean that instructs director to enable (true) or disable (false) automatic certificate and key generation.

      Here is an example:

      Note

      The certificates and keys are multi-line values, and you must indent all of the lines to the same level.

      parameter_defaults:
          OctaviaCaCert: |
            -----BEGIN CERTIFICATE-----
            MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
            [snip]
            sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
            -----END CERTIFICATE-----
      
          OctaviaCaKey: |
            -----BEGIN RSA PRIVATE KEY-----
            Proc-Type: 4,ENCRYPTED
            [snip]
            -----END RSA PRIVATE KEY-----
      
          OctaviaClientCert: |
            -----BEGIN CERTIFICATE-----
            MIIDmjCCAoKgAwIBAgIBATANBgkqhkiG9w0BAQsFADBcMQswCQYDVQQGEwJVUzEP
            [snip]
            270l5ILSnfejLxDH+vI=
            -----END CERTIFICATE-----
            -----BEGIN PRIVATE KEY-----
            MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDU771O8MTQV8RY
            [snip]
            KfrjE3UqTF+ZaaIQaz3yayXW
            -----END PRIVATE KEY-----
      
          OctaviaCaKeyPassphrase:
            b28c519a-5880-4e5e-89bf-c042fc75225d
      
          OctaviaGenerateCerts: false
          [rest of file snipped]
Warning

If you use the default certificates that director generates and you set the OctaviaGenerateCerts parameter to false, your certificates are not renewed automatically when they expire.
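
If you must generate your own CA and client certificate to populate these parameters, the following openssl sketch illustrates one way to do it. All names, passphrases, and validity periods are illustrative, not Red Hat defaults:

$ openssl genrsa -aes256 -passout pass:changeme -out ca.key 4096
$ openssl req -x509 -new -key ca.key -passin pass:changeme -days 3650 -subj "/CN=octavia-example-ca" -out ca.crt
$ openssl genrsa -out client.key 4096
$ openssl req -new -key client.key -subj "/CN=octavia-controller" -out client.csr
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -passin pass:changeme -CAcreateserial -days 365 -out client.crt
$ cat client.crt client.key > client.pem

In this sketch, the contents of ca.crt map to OctaviaCaCert, ca.key to OctaviaCaKey (with the pass:changeme value as OctaviaCaKeyPassphrase), and client.pem to OctaviaClientCert, which expects the certificate and unencrypted key concatenated.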

15.5. Deploying Octavia

You deploy Octavia using the Red Hat OpenStack Platform (RHOSP) director. Director uses heat templates to deploy Octavia (and other RHOSP components).

Ensure that your environment has access to the Octavia image. For more information about image registry methods, see the Containerization section of the Director Installation and Usage guide.

To deploy Octavia in the overcloud:

$ openstack overcloud deploy --templates -e \
/usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml
Note

The director updates the amphora image to the latest amphora image during a stack update or upgrade.
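
If you created a custom environment file (for example, octavia-environment.yaml) to override default values, include it after the Octavia service environment file so that your overrides take effect:

$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
-e /home/stack/templates/octavia-environment.yaml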

15.6. Changing Octavia default settings

If you want to override the default parameters that director uses to deploy Octavia, you can specify your values in one or more custom YAML-formatted environment files (for example, octavia-environment.yaml).

Important

As a best practice, always make Octavia configuration changes in a custom environment file and re-run Red Hat OpenStack Platform director. You risk losing ad hoc configuration changes when you change individual files manually and do not use director.

The parameters that director uses to deploy and configure Octavia are straightforward. Here are a few examples:

  • OctaviaControlNetwork

    The name for the neutron network used for the amphora control network.

  • OctaviaControlSubnetCidr

    The subnet for the amphora control network, in CIDR notation.

  • OctaviaMgmtPortDevName

    The name of the Octavia management network interface, used for communication between the Octavia worker and health manager and the amphora machines.

  • OctaviaConnectionLogging

    A Boolean that enables (true) or disables (false) connection flow logging in load-balancing instances (amphorae). Because the amphorae have log rotation turned on, logs are unlikely to fill up disks. When disabled, performance is marginally impacted.

For the list of Octavia parameters that director uses, consult the following file on the undercloud:

/usr/share/openstack-tripleo-heat-templates/deployment/octavia/octavia-deployment-config.j2.yaml

Your environment file must contain the keyword parameter_defaults:. Put your parameter-value pairs after the parameter_defaults: keyword. Here is an example:

parameter_defaults:
    OctaviaMgmtPortDevName: "o-hm0"
    OctaviaControlNetwork: 'lb-mgmt-net'
    OctaviaControlSubnet: 'lb-mgmt-subnet'
    OctaviaControlSecurityGroup: 'lb-mgmt-sec-group'
    OctaviaControlSubnetCidr: '172.24.0.0/16'
    OctaviaControlSubnetGateway: '172.24.0.1'
    OctaviaControlSubnetPoolStart: '172.24.0.2'
    OctaviaControlSubnetPoolEnd: '172.24.255.254'
Tip

YAML files are sensitive to indentation and to where in the file a parameter is placed. Make sure that parameter_defaults: starts in the first column (no leading whitespace characters) and that your parameter-value pairs start in column five (each parameter is preceded by four whitespace characters).
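
Before you re-run director, a quick way to catch YAML syntax errors in your environment file is to parse it. This assumes that python3 and the PyYAML module are available on the undercloud; the command prints nothing when the file is valid:

$ python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' /home/stack/templates/octavia-environment.yaml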

15.7. Secure a load balancer with an access control list

You can use the Octavia API to create an access control list (ACL) to limit incoming traffic to a listener to a set of allowed source IP addresses. Any other incoming traffic is rejected.

Prerequisites

This document makes the following assumptions to demonstrate how to secure an Octavia load balancer:

  • The back-end servers, 192.0.2.10 and 192.0.2.11, are on a subnet named private-subnet, and have been configured with a custom application on TCP port 80.
  • The subnet, public-subnet, is a shared external subnet created by the cloud operator that is reachable from the internet.
  • The load balancer is a basic load balancer that is accessible from the internet and distributes requests to the back-end servers.
  • The application on TCP port 80 is accessible only to a limited set of source IP addresses (192.0.2.0/24 and 198.51.100.0/24).

Procedure

  1. Create a load balancer (lb1) on the subnet (public-subnet):

    Note

    Replace the names in parentheses () with names that your site uses.

    $ openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
  2. Re-run the following command until the load balancer (lb1) shows a status that is ACTIVE and ONLINE:

    $ openstack loadbalancer show lb1
  3. Create listener (listener1) with the allowed CIDRs:

    $ openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24 lb1
  4. Create the default pool (pool1) for the listener (listener1):

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP
  5. Add members 192.0.2.10 and 192.0.2.11 on the subnet (private-subnet) to the pool (pool1) that you created:

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
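
If you need to change the ACL later, you can replace the set of allowed CIDRs on the existing listener. With the listener set command, repeating the --allowed-cidr option once per CIDR replaces the previous list (203.0.113.0/24 is an illustrative additional network):

$ openstack loadbalancer listener set --allowed-cidr 192.0.2.0/24 --allowed-cidr 203.0.113.0/24 listener1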

Verification steps

  1. Enter the following command:

    $ openstack loadbalancer listener show listener1

    You should see output similar to the following:

    +-----------------------------+--------------------------------------+
    | Field                       | Value                                |
    +-----------------------------+--------------------------------------+
    | admin_state_up              | True                                 |
    | connection_limit            | -1                                   |
    | created_at                  | 2019-12-09T11:38:05                  |
    | default_pool_id             | None                                 |
    | default_tls_container_ref   | None                                 |
    | description                 |                                      |
    | id                          | d26ba156-03c3-4051-86e8-f8997a202d8e |
    | insert_headers              | None                                 |
    | l7policies                  |                                      |
    | loadbalancers               | 2281487a-54b9-4c2a-8d95-37262ec679d6 |
    | name                        | listener1                            |
    | operating_status            | ONLINE                               |
    | project_id                  | 308ca9f600064f2a8b3be2d57227ef8f     |
    | protocol                    | TCP                                  |
    | protocol_port               | 80                                   |
    | provisioning_status         | ACTIVE                               |
    | sni_container_refs          | []                                   |
    | timeout_client_data         | 50000                                |
    | timeout_member_connect      | 5000                                 |
    | timeout_member_data         | 50000                                |
    | timeout_tcp_inspect         | 0                                    |
    | updated_at                  | 2019-12-09T11:38:14                  |
    | client_ca_tls_container_ref | None                                 |
    | client_authentication       | NONE                                 |
    | client_crl_container_ref    | None                                 |
    | allowed_cidrs               | 192.0.2.0/24                         |
    |                             | 198.51.100.0/24                      |
    +-----------------------------+--------------------------------------+

    The parameter, allowed_cidrs, is set to allow traffic only from 192.0.2.0/24 and 198.51.100.0/24.

  2. To verify that the load balancer is secure, try to make a request to the listener from a client whose CIDR is not in the allowed_cidrs list; the request should not succeed. You should see output similar to the following:

    curl: (7) Failed to connect to 10.0.0.226 port 80: Connection timed out
    curl: (7) Failed to connect to 10.0.0.226 port 80: Connection timed out
    curl: (7) Failed to connect to 10.0.0.226 port 80: Connection timed out
    curl: (7) Failed to connect to 10.0.0.226 port 80: Connection timed out
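
Conversely, the same request from a client whose source address is inside one of the allowed CIDRs succeeds. In this example, 10.0.0.226 is the load balancer address shown in the failed requests above, and the response body depends on your application:

$ curl -v http://10.0.0.226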

15.8. Configuring an HTTP load balancer

To configure a simple HTTP load balancer, complete the following steps:

  1. Create the load balancer on a subnet:

    $ openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
  2. Monitor the state of the load balancer:

    $ openstack loadbalancer show lb1

    When you see a status of ACTIVE and ONLINE, the load balancer is created and running, and you can go to the next step.

    Note

    To check load balancer status from the Compute service (nova), use the openstack server list --all | grep amphora command. Creating load balancers can be a slow process (status displaying as PENDING) because load balancers are virtual machines (VMs) and not containers.

  3. Create a listener:

    $ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
  4. Create the listener default pool:

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  5. Create a health monitor on the pool to test the “/healthcheck” path:

    $ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
  6. Add load balancer members to the pool:

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
  7. Create a floating IP address on a public subnet:

    $ openstack floating ip create public
  8. Associate this floating IP with the load balancer VIP port:

    $ openstack floating ip set --port <load_balancer_vip_port> <floating_ip>
    Tip

    To locate <load_balancer_vip_port>, run the openstack loadbalancer show lb1 command and use the vip_port_id value in the output.
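
To confirm that requests are distributed across both pool members, you can send several requests to the floating IP address. Replace <floating_ip> with the address that you associated in step 8; the responses alternate between the two back ends if each member returns distinct content:

$ for i in 1 2 3 4; do curl -s http://<floating_ip>; done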

15.9. Verifying the load balancer

To verify the load balancer, complete the following steps:

  1. Run the openstack loadbalancer show command to verify the load balancer settings:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer show lb1
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | created_at          | 2018-04-18T12:28:34                  |
    | description         |                                      |
    | flavor              |                                      |
    | id                  | 788fe121-3dec-4e1b-8360-4020642238b0 |
    | listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
    | name                | lb1                                  |
    | operating_status    | ONLINE                               |
    | pools               | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
    | project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
    | provider            | octavia                              |
    | provisioning_status | ACTIVE                               |
    | updated_at          | 2018-04-18T14:03:09                  |
    | vip_address         | 192.168.0.11                         |
    | vip_network_id      | 9bca13be-f18d-49a5-a83d-9d487827fd16 |
    | vip_port_id         | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 |
    | vip_qos_policy_id   | None                                 |
    | vip_subnet_id       | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 |
    +---------------------+--------------------------------------+
  2. Run the amphora list command to find the UUID of the amphora associated with load balancer lb1:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora list | grep <UUID of loadbalancer lb1>
  3. Run the amphora show command with the amphora UUID to view amphora information:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora show 62e41d30-1484-4e50-851c-7ab6e16b88d0
    +-----------------+--------------------------------------+
    | Field           | Value                                |
    +-----------------+--------------------------------------+
    | id              | 62e41d30-1484-4e50-851c-7ab6e16b88d0 |
    | loadbalancer_id | 53a497b3-267d-4abc-968f-94237829f78f |
    | compute_id      | 364efdb9-679c-4af4-a80c-bfcb74fc0563 |
    | lb_network_ip   | 192.168.0.13                         |
    | vrrp_ip         | 10.0.0.11                            |
    | ha_ip           | 10.0.0.10                            |
    | vrrp_port_id    | 74a5c1b4-a414-46b8-9263-6328d34994d4 |
    | ha_port_id      | 3223e987-5dd6-4ec8-9fb8-ee34e63eef3c |
    | cert_expiration | 2020-07-16T12:26:07                  |
    | cert_busy       | False                                |
    | role            | BACKUP                               |
    | status          | ALLOCATED                            |
    | vrrp_interface  | eth1                                 |
    | vrrp_id         | 1                                    |
    | vrrp_priority   | 90                                   |
    | cached_zone     | nova                                 |
    | created_at      | 2018-07-17T12:26:07                  |
    | updated_at      | 2018-07-17T12:30:36                  |
    | image_id        | a3f9f3e4-92b6-4a27-91c8-ddc69714da8f |
    +-----------------+--------------------------------------+
  4. Run the openstack loadbalancer listener show command to view the listener details:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer listener show listener1
    +---------------------------+------------------------------------------------------------------------+
    | Field                     | Value                                                                  |
    +---------------------------+------------------------------------------------------------------------+
    | admin_state_up            | True                                                                   |
    | connection_limit          | -1                                                                     |
    | created_at                | 2018-04-18T12:51:25                                                    |
    | default_pool_id           | 627842b3-eed8-4f5f-9f4a-01a738e64d6a                                   |
    | default_tls_container_ref | http://10.0.0.101:9311/v1/secrets/7eafeabb-b4a1-4bc4-8098-b6281736bfe2 |
    | description               |                                                                        |
    | id                        | 09f28053-fde8-4c78-88b9-0f191d84120e                                   |
    | insert_headers            | None                                                                   |
    | l7policies                |                                                                        |
    | loadbalancers             | 788fe121-3dec-4e1b-8360-4020642238b0                                   |
    | name                      | listener1                                                              |
    | operating_status          | ONLINE                                                                 |
    | project_id                | dda678ca5b1241e7ad7bf7eb211a2fd7                                       |
    | protocol                  | TERMINATED_HTTPS                                                       |
    | protocol_port             | 443                                                                    |
    | provisioning_status       | ACTIVE                                                                 |
    | sni_container_refs        | []                                                                     |
    | updated_at                | 2018-04-18T14:03:09                                                    |
    +---------------------------+------------------------------------------------------------------------+
  5. Run the openstack loadbalancer pool show command to view the pool and load-balancer members:

    (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer pool show pool1
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | created_at          | 2018-04-18T12:53:49                  |
    | description         |                                      |
    | healthmonitor_id    |                                      |
    | id                  | 627842b3-eed8-4f5f-9f4a-01a738e64d6a |
    | lb_algorithm        | ROUND_ROBIN                          |
    | listeners           | 09f28053-fde8-4c78-88b9-0f191d84120e |
    | loadbalancers       | 788fe121-3dec-4e1b-8360-4020642238b0 |
    | members             | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 |
    |                     | 40db746d-063e-4620-96ee-943dcd351b37 |
    | name                | pool1                                |
    | operating_status    | ONLINE                               |
    | project_id          | dda678ca5b1241e7ad7bf7eb211a2fd7     |
    | protocol            | HTTP                                 |
    | provisioning_status | ACTIVE                               |
    | session_persistence | None                                 |
    | updated_at          | 2018-04-18T14:03:09                  |
    +---------------------+--------------------------------------+
  6. Run the openstack floating ip list command to verify the floating IP address:

    (overcloud) [stack@undercloud-0 ~]$ openstack floating ip list
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
    | ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
    | 89661971-fa65-4fa6-b639-563967a383e7 | 10.0.0.213          | 192.168.0.11     | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | fe0f3854-fcdc-4433-bc57-3e4568e4d944 | dda678ca5b1241e7ad7bf7eb211a2fd7 |
    +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  7. Verify HTTPS traffic flows across the load balancer:

    (overcloud) [stack@undercloud-0 ~]$ curl -v https://10.0.0.213 --insecure
    * About to connect() to 10.0.0.213 port 443 (#0)
    *   Trying 10.0.0.213...
    * Connected to 10.0.0.213 (10.0.0.213) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * skipping SSL peer certificate verification
    * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    * Server certificate:
    * 	subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
    * 	start date: Apr 18 09:21:45 2018 GMT
    * 	expire date: Apr 18 09:21:45 2019 GMT
    * 	common name: www.example.com
    * 	issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: 10.0.0.213
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Content-Length: 30
    <
    * Connection #0 to host 10.0.0.213 left intact

15.10. Overview of TLS-terminated HTTPS load balancer

When a TLS-terminated HTTPS load balancer is implemented, web clients communicate with the load balancer over Transport Layer Security (TLS) protocols. The load balancer terminates the TLS session and forwards the decrypted requests to the back end servers.

By terminating the TLS session on the load balancer, you offload the CPU-intensive encryption operations to the load balancer and enable it to use advanced features such as Layer 7 inspection.

15.11. Creating a TLS-terminated HTTPS load balancer

This procedure describes how to configure a TLS-terminated HTTPS load balancer that is accessible from the internet through Transport Layer Security (TLS) and distributes requests to the back end servers over the non-encrypted HTTP protocol.

Prerequisites

  • A private subnet that contains back end servers that host non-secure HTTP applications on TCP port 80.
  • A shared external (public) subnet that is reachable from the internet.
  • TLS public-key cryptography has been configured with the following characteristics:

    • A TLS certificate, key, and intermediate certificate chain have been obtained from an external certificate authority (CA) for the DNS name assigned to the load balancer VIP address (for example, www.example.com).
    • The certificate, key, and intermediate certificate chain reside in separate files in the current directory.
    • The key and certificate are PEM-encoded.
    • The key is not encrypted with a passphrase.
    • The intermediate certificate chain contains multiple PEM-encoded certificates that are concatenated together.
  • You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide.

Procedure

  1. Combine the key (server.key), certificate (server.crt), and intermediate certificate chain (ca-chain.crt) into a single PKCS12 file (server.p12).

    Note

    The values in parentheses are provided as examples. Replace these with values appropriate for your site.

    $ openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
  2. Using the Key Manager service, create a secret resource (tls_secret1) for the PKCS12 file.

    $ openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
  3. Create a load balancer (lb1) on the public subnet (public-subnet).

    $ openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
  4. Before you can proceed, the load balancer that you created (lb1) must be in an active and online state.

    Run the openstack loadbalancer show command until the load balancer responds with an ACTIVE and ONLINE status. (You might need to run this command more than once.)

    $ openstack loadbalancer show lb1
  5. Create a TERMINATED_HTTPS listener (listener1), and reference the secret resource as the default TLS container for the listener.

    $ openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
  6. Create a pool (pool1) and make it the default pool for the listener.

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  7. Add the non-secure HTTP back end servers (192.0.2.10 and 192.0.2.11) on the private subnet (private-subnet) to the pool.

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
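
To verify that the listener terminates TLS with your certificate, you can inspect the certificate that the load balancer serves. Replace <vip_or_floating_ip> with the address of your load balancer:

$ openssl s_client -connect <vip_or_floating_ip>:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates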

15.12. Creating a TLS-terminated HTTPS load balancer with SNI

This procedure describes how to configure a TLS-terminated HTTPS load balancer that is accessible from the internet through Transport Layer Security (TLS) and distributes requests to the back end servers over the non-encrypted HTTP protocol. In this configuration, there is one listener that contains multiple TLS certificates and implements Server Name Indication (SNI) technology.

Prerequisites

  • A private subnet that contains back end servers that host non-secure HTTP applications on TCP port 80.
  • A shared external (public) subnet that is reachable from the internet.
  • TLS public-key cryptography has been configured with the following characteristics:

    • Multiple TLS certificates, keys, and intermediate certificate chains have been obtained from an external certificate authority (CA) for the DNS names assigned to the load balancer VIP address (for example, www.example.com and www2.example.com).
    • The certificates, keys, and intermediate certificate chains reside in separate files in the current directory.
    • The keys and certificates are PEM-encoded.
    • The keys are not encrypted with passphrases.
    • The intermediate certificate chains contain multiple certificates that are PEM-encoded and are concatenated together.
  • You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide.

Procedure

  1. For each of the TLS certificates in the SNI list, combine the key (server.key), certificate (server.crt), and intermediate certificate chain (ca-chain.crt) into a single PKCS12 file (server.p12).

    In this example, you create two PKCS12 files (server.p12 and server2.p12), one for each certificate (www.example.com and www2.example.com).

    Note

    The values in parentheses are provided as examples. Replace these with values appropriate for your site.

    $ openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
    
    $ openssl pkcs12 -export -inkey server2.key -in server2.crt -certfile ca-chain2.crt -passout pass: -out server2.p12
  2. Using the Key Manager service, create secret resources (tls_secret1 and tls_secret2) for the PKCS12 files.

    $ openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
    $ openstack secret store --name='tls_secret2' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server2.p12)"
  3. Create a load balancer (lb1) on the public subnet (public-subnet).

    $ openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
  4. Before you can proceed, the load balancer that you created (lb1) must be in an active and online state.

    Run the openstack loadbalancer show command until the load balancer responds with an ACTIVE and ONLINE status. (You might need to run this command more than once.)

    $ openstack loadbalancer show lb1
  5. Create a TERMINATED_HTTPS listener (listener1), and reference both the secret resources using SNI.

    (Reference tls_secret1 as the default TLS container for the listener.)

    $ openstack loadbalancer listener create --protocol-port 443 \
    --protocol TERMINATED_HTTPS --name listener1 \
    --default-tls-container=$(openstack secret list | awk '/ tls_secret1 / {print $2}') \
    --sni-container-refs $(openstack secret list | awk '/ tls_secret1 / {print $2}') \
    $(openstack secret list | awk '/ tls_secret2 / {print $2}') -- lb1
  6. Create a pool (pool1) and make it the default pool for the listener.

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  7. Add the non-secure HTTP back end servers (192.0.2.10 and 192.0.2.11) on the private subnet (private-subnet) to the pool.

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
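
To verify that SNI returns the correct certificate for each server name, you can perform a TLS handshake for each name. The -servername option sets the SNI value; replace <vip_or_floating_ip> with the address of your load balancer:

$ openssl s_client -connect <vip_or_floating_ip>:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
$ openssl s_client -connect <vip_or_floating_ip>:443 -servername www2.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject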

15.13. Creating HTTP and TLS-terminated HTTPS load balancers on the same back end

This procedure describes how to configure a non-secure listener and a TLS-terminated HTTPS listener on the same load balancer (and the same IP address). Use this configuration when you want to respond to web clients with the exact same content, regardless of whether the client connects over the secure or the non-secure HTTP protocol.

Prerequisites

  • A private subnet that contains back end servers that host non-secure HTTP applications on TCP port 80.
  • A shared external (public) subnet that is reachable from the internet.
  • TLS public-key cryptography has been configured with the following characteristics:

    • A TLS certificate, key, and optional intermediate certificate chain have been obtained from an external certificate authority (CA) for the DNS name assigned to the load balancer VIP address (for example, www.example.com).
    • The certificate, key, and intermediate certificate chain reside in separate files in the current directory.
    • The key and certificate are PEM-encoded.
    • The key is not encrypted with a passphrase.
    • The intermediate certificate chain contains multiple PEM-encoded certificates that are concatenated together.
  • You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Manage Secrets with OpenStack Key Manager guide.
  • The non-secure HTTP listener is configured with the same pool as the TLS-terminated HTTPS listener.

Procedure

  1. Combine the key (server.key), certificate (server.crt), and intermediate certificate chain (ca-chain.crt) into a single PKCS12 file (server.p12).

    Note

    The values in parentheses are provided as examples. Replace these with values appropriate for your site.

    $ openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
  2. Using the Key Manager service, create a secret resource (tls_secret1) for the PKCS12 file.

    $ openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
  3. Create a load balancer (lb1) on the public subnet (public-subnet).

    $ openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
  4. Before you can proceed, the load balancer that you created (lb1) must be in an active and online state.

    Run the openstack loadbalancer show command until the load balancer responds with an ACTIVE and ONLINE status. (You might need to run this command more than once.)

    $ openstack loadbalancer show lb1
  5. Create a TERMINATED_HTTPS listener (listener1), and reference the secret resource as the default TLS container for the listener.

    $ openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
  6. Create a pool (pool1) and make it the default pool for the listener.

    $ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  7. Add the non-secure HTTP back end servers (192.0.2.10 and 192.0.2.11) on the private subnet (private-subnet) to the pool.

    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    $ openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
  8. Create a non-secure HTTP listener (listener2), and make its default pool the same as that of the secure listener.

    $ openstack loadbalancer listener create --protocol-port 80 --protocol HTTP --name listener2 --default-pool pool1 lb1
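
Because both listeners share pool1, requests to either port return the same content from the same back ends. A quick spot check (replace <vip_or_floating_ip> with the address of your load balancer; -k skips certificate verification for this test):

$ curl -s http://<vip_or_floating_ip>
$ curl -sk https://<vip_or_floating_ip>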

15.14. Accessing Amphora logs

Amphora is the instance that performs load balancing. You can view Amphora logging information in the systemd journal.

  1. Start the ssh-agent, and add your user identity key to the agent:

    [stack@undercloud-0] $ eval `ssh-agent -s`
    [stack@undercloud-0] $ ssh-add
  2. Use SSH to connect to the Amphora instance:

    [stack@undercloud-0] $ ssh -A -t heat-admin@<controller node IP address> ssh cloud-user@<IP address of Amphora in load-balancing management network>
  3. View the systemd journal:

    [cloud-user@amphora-f60af64d-570f-4461-b80a-0f1f8ab0c422 ~] $ sudo journalctl

    Refer to the journalctl man page for information about filtering journal output; see the example after this procedure.

  4. When you are finished viewing the journal and have closed your connections to the Amphora instance and the Controller node, stop the SSH agent:

    [stack@undercloud-0] $ eval `ssh-agent -k`
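
As noted in step 3, you can filter the journal output. For example, to restrict output to the last hour, or to follow new messages as they arrive (standard journalctl options):

$ sudo journalctl --since "1 hour ago"
$ sudo journalctl -f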

15.15. Updating running amphora instances

15.15.1. Overview

Periodically, you must update a running load balancing instance (amphora) with a newer image. For example, you must update an amphora instance during the following events:

  • An update or upgrade of Red Hat OpenStack Platform.
  • A security update to your system.
  • A change to a different flavor for the underlying virtual machine.

To update an amphora instance with a new image, you must fail over the load balancer and then wait for the load balancer to return to an active state. When the load balancer is active again, it is running the new image.

15.15.2. Prerequisites

New images for amphora are available during an OpenStack update or upgrade.

15.15.3. Update amphora instances with new images

During an OpenStack update or upgrade, director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures Octavia to use the new image. When you fail over the load balancer, you force Octavia to start amphora instances that use the new image.

  1. Make sure that you have reviewed the prerequisites before you begin updating amphora.
  2. List the IDs for all of the load balancers that you want to update:

    $ openstack loadbalancer list -c id -f value
  3. Fail over each load balancer:

    $ openstack loadbalancer failover <loadbalancer_id>
    Note

    When you start failing over the load balancers, monitor system utilization, and as needed, adjust the rate at which you perform failovers. A load balancer failover creates new virtual machines and ports, which might temporarily increase the load on OpenStack Networking.

  4. Monitor the state of the failed over load balancer:

    $ openstack loadbalancer show <loadbalancer_id>

    The update is complete when the load balancer status is ACTIVE.
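
You can combine steps 2 and 3 in a shell loop. The following sketch is illustrative: it fails over every load balancer and pauses between failovers; adjust or remove the sleep interval based on the system utilization guidance in the note above:

$ for lb in $(openstack loadbalancer list -c id -f value); do openstack loadbalancer failover $lb; sleep 60; done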
