Configuring load balancing as a service
Configuring the Load-balancing service (octavia) to manage network traffic across the data plane in a Red Hat OpenStack Services on OpenShift environment
Abstract
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
The content of this guide is a Technology Preview
The content in this guide is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Chapter 1. Introduction to the Load-balancing service
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
The Load-balancing service (octavia) provides a Load Balancing-as-a-Service (LBaaS) API version 2 implementation for Red Hat OpenStack Services on OpenShift (RHOSO) environments. The Load-balancing service manages multiple virtual machines, containers, or bare metal servers—collectively known as amphorae—which it launches on demand. The ability to provide on-demand, horizontal scaling makes the Load-balancing service a fully-featured load balancer that is appropriate for large RHOSO enterprise deployments.
1.1. Load-balancing service components
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) uses a set of VM instances referred to as amphorae that reside on the Compute nodes. The Load-balancing service controllers communicate with the amphorae over a load-balancing management network (lb-mgmt-net).
When you use octavia, you can create load-balancer virtual IPs (VIPs) that do not require floating IPs (FIPs). Avoiding FIPs has the advantage of improving the performance of traffic that flows through the load balancer.
Figure 1.1. Load-balancing service components
Figure 1.1 shows that the components of the Load-balancing service are hosted on the same nodes as the Networking API server, which, by default, are the Red Hat OpenShift worker nodes that host the RHOSO control plane. The Load-balancing service consists of the following components:
- Octavia API (octavia-api pods) - Provides the REST API for users to interact with octavia.
- Controller Worker (octavia-worker pods) - Sends configuration and configuration updates to amphorae over the load-balancing management network.
- Health Manager (octavia-healthmanager pods) - Monitors the health of individual amphorae and handles failover events if an amphora encounters a failure.
- Housekeeping Manager (octavia-housekeeping pods) - Cleans up deleted database records, and manages amphora certificate rotation.
- Driver agent (included within the octavia-api pods) - Supports other provider drivers, such as OVN.
- Amphora - Performs the load balancing. Amphorae are typically instances that run on Compute nodes that you configure with load balancing parameters according to the listener, pool, health monitor, L7 policies, and members' configuration. Amphorae send a periodic heartbeat to the Health Manager.
1.2. Load-balancing service object model
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) uses a typical load-balancing object model.
Figure 1.2. Load-balancing service object model diagram
- Load balancer
- The top API object that represents the load-balancing entity. The VIP address is allocated when you create the load balancer. When you use the amphora provider to create the load balancer, one or more amphora instances launch on one or more Compute nodes.
- Listener
- The port on which the load balancer listens, for example, TCP port 80 for HTTP. Listeners also support TLS-terminated HTTPS load balancers.
- Health Monitor
- A process that performs periodic health checks on each back-end member server to pre-emptively detect failed servers and temporarily remove them from the pool.
- Pool
- A group of members that handle client requests from the load balancer. You can associate pools with more than one listener by using the API. You can share pools with L7 policies.
- Member
- Describes how to connect to the back-end instances or services. This description consists of the IP address and network port on which the back-end member is available.
- L7 Rule
- Defines the layer 7 (L7) conditions that determine whether an L7 policy applies to the connection.
- L7 Policy
- A collection of L7 rules associated with a listener, and which might also have an association to a back-end pool. Policies describe actions that the load balancer takes if all of the rules in the policy are true.
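The object model maps directly onto the Load-balancing service CLI. The following minimal sketch shows the order in which you typically create the objects for a simple HTTP load balancer; the names lb1, listener1, pool1, healthmon1, and member1, and the <subnet> and <member_ip> values, are placeholders. Chapter 5 describes these steps in detail.

$ openstack loadbalancer create --name lb1 --vip-subnet-id <subnet> --wait
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ openstack loadbalancer healthmonitor create --name healthmon1 --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
$ openstack loadbalancer member create --name member1 --subnet-id <subnet> --address <member_ip> --protocol-port 80 pool1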
Additional resources
1.3. Uses of load balancing in RHOSO
Load balancing is essential for enabling simple or automatic delivery scaling and availability for cloud deployments. The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) depends on other RHOSO services:
- Compute service (nova) - For managing the Load-balancing service VM instance (amphora) lifecycle, and creating compute resources on demand.
- Networking service (neutron) - For network connectivity between amphorae, tenant environments, and external networks.
- Key Manager service (barbican) - For managing TLS certificates and credentials, when TLS session termination is configured on a listener.
- Identity service (keystone) - For authentication requests to the octavia API, and for the Load-balancing service to authenticate with other RHOSO services.
- Image service (glance) - For storing the amphora virtual machine image.
The Load-balancing service interacts with the other RHOSO services through a driver interface. The driver interface avoids major restructuring of the Load-balancing service if an external component requires replacement with a functionally equivalent service.
Chapter 2. Considerations for implementing the Load-balancing service
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
You must make several decisions when you plan to deploy the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), such as choosing which provider to use and whether to implement a highly available environment.
2.1. Load-balancing service provider drivers
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) supports enabling multiple provider drivers by using the Octavia v2 API. You can choose to use one provider driver, or multiple provider drivers simultaneously.
RHOSO provides two load-balancing providers, amphora and Open Virtual Network (OVN).
Amphora, the default, is a highly available load balancer with a feature set that scales with your compute environment. Because of this, amphora is suited for large-scale deployments.
The OVN load-balancing provider is a lightweight load balancer with a basic feature set. OVN is typically used for east-west, layer 4 network traffic. OVN provisions quickly and consumes fewer resources than a full-featured load-balancing provider such as amphora.
The information in this section applies only to the amphora load-balancing provider, unless indicated otherwise.
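For example, to request the OVN provider instead of the default amphora provider, specify it when you create the load balancer. The load balancer name and subnet in this sketch are placeholder values.

$ openstack loadbalancer create --name lb1 --provider ovn --vip-subnet-id private_subnet --wait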
Additional resources
2.2. Load-balancing service (octavia) feature support matrix
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) provides two load-balancing providers, amphora and Open Virtual Network (OVN).
Amphora is a full-featured load-balancing provider that requires a separate haproxy VM and an extra latency hop.
OVN runs on every node and does not require a separate VM nor an extra hop. However, OVN has far fewer load-balancing features than amphora.
The following table lists the Load-balancing service features that RHOSO 18.0 supports, and the maintenance release in which support for each feature began.
If the feature is not listed, then RHOSO 18.0 does not support the feature.
Feature | Amphora Provider | OVN Provider |
ML2/OVN L3 HA | Full support | Full support |
ML2/OVN DVR | Full support | Full support |
DPDK | No support | No support |
SR-IOV | No support | No support |
Health monitors | Full support | No support |
Amphora active-standby | Full support | No support |
Terminated HTTPS load balancers (with barbican) | Full support | No support |
UDP | Full support | Full support |
Backup members | Technology Preview only | No support |
TLS client authentication | Technology Preview only | No support |
TLS back end encryption | Technology Preview only | No support |
Octavia flavors | Full support | No support |
Object tags | Full support | Full support |
Listener API timeouts | Full support | No support |
Log offloading | Future release | No support |
VIP access control list | Full support | No support |
Availability zones | Full support | No support |
Volume-based amphora | No support | No support |
Additional resources
2.3. Load-balancing service software requirements
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) requires that you configure the following core OpenStack components:
- Compute (nova)
- OpenStack Networking (neutron)
- Image (glance)
- Identity (keystone)
- MariaDB
2.4. Basics of active-standby topology for Load-balancing service instances
When you deploy the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), you can decide whether, by default, load balancers are highly available when users create them. If you want to give users a choice, then after RHOSO deployment, create a Load-balancing service flavor for creating highly available load balancers and a flavor for creating standalone load balancers.
By default, the amphora provider driver is configured for a single Load-balancing service (amphora) instance topology with limited support for high availability (HA). However, you can make Load-balancing service instances highly available when you implement an active-standby topology.
In this topology, the Load-balancing service boots an active and a standby amphora instance for each load balancer, and maintains session persistence between them. If the active instance becomes unhealthy, the load balancer automatically fails over to the standby instance, making it active. The Load-balancing service health manager automatically rebuilds an instance that fails.
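If you want to offer both topologies to users, one approach is to expose them through Load-balancing service flavors. The following sketch assumes the amphora provider and uses placeholder names; creating flavor profiles and flavors typically requires administrative privileges.

$ openstack loadbalancer flavorprofile create --name amphora-ha-profile \
  --provider amphora --flavor-data '{"loadbalancer_topology": "ACTIVE_STANDBY"}'
$ openstack loadbalancer flavor create --name ha --flavorprofile amphora-ha-profile \
  --description "Highly available (active-standby) load balancer" --enable
$ openstack loadbalancer create --name lb1 --flavor ha --vip-subnet-id private_subnet --wait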
Chapter 3. Deploying the Load-balancing service in an existing environment
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Deploying the Load-balancing service (octavia) to an existing Red Hat OpenStack Services on OpenShift (RHOSO) environment consists of performing the required steps for network configuration and security and then deploying the Load-balancing service in the RHOSO control plane.
Overview
You must perform the steps in the following procedures to deploy the Load-balancing service (octavia).
The steps in these procedures provide sample values that you add to the required CRs. The actual values that you provide depend on your particular hardware configuration and local networking policies.
3.1. Adding the Load-balancing service interface to the configuration policy
You begin the process of deploying the Load-balancing service (octavia) in a pre-existing Red Hat OpenStack Services on OpenShift (RHOSO) environment by adding the required interface to the node network configuration policy (nncp).
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
- You have a unique VLAN ID for the load-balancing management network, lb-mgmt-net.
Procedure
Add the VLAN interface as a port to the octbr bridge.

Performing this step enables pods connected to the octavia network attachment to communicate with pods running on other worker nodes. Because the interface is a VLAN, it isolates the load-balancing management network from other networks that might share the same base interface.

Example

In this example, the base interface name used is enp6s0 and the VLAN ID used is 24. Replace these values with ones that are appropriate for your environment. For more information, see Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.

oc get -n openstack --no-headers nncp | cut -f 1 -d ' ' | while read ; do
  oc patch -n openstack nncp $REPLY --type=merge --patch '
  spec:
    desiredState:
      interfaces:
      - description: Octavia vlan host interface
        name: enp6s0.24
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 24
      - bridge:
          options:
            stp:
              enabled: false
          port:
          - name: enp6s0.24
        description: Configuring bridge octbr
        mtu: 1500
        name: octbr
        state: up
        type: linux-bridge
  '
done
Validation
Confirm that the interface was successfully added by running the following command:
$ oc get nncp -n openstack
Sample output
When successful, you should see output similar to the following:
NAME       STATUS      REASON
worker-0   Available   SuccessfullyConfigured
worker-1   Available   SuccessfullyConfigured
worker-2   Available   SuccessfullyConfigured
3.2. Adding the network attachment definition of the load-balancing management network
Adding the network attachment definition for the load-balancing management network is a required step for deploying the Load-balancing service (octavia) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
The octavia network attachment is required to connect pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator. RHOSO uses the podified Open vSwitch instance to implement the route between the provider network and the management project (tenant) network. This attachment must be a bridgeable interface in the Open vSwitch pod, and must permit communication with other pods on the same node. The bridge attachment type, in conjunction with the VLAN interface added to the bridge in the NodeNetworkConfigurationPolicy, creates the necessary layer 2 link that enables connectivity across the nodes.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Add the network attachment definition.
Example
In the following example, a network attachment definition is added to a definition file named octavia-net-attach-def.yaml.

Important

The IP addresses and CIDRs defined in the network attachment definition are used only on a private network and VLAN. The values shown in this example are only for demonstration purposes. Use values that are appropriate for your environment.
cat >> octavia-net-attach-def.yaml << EOF_CAT
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  labels:
    osp/net: octavia
  name: octavia
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "octavia",
      "type": "bridge",
      "bridge": "octbr",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/16",
        "range_start": "192.0.2.30",
        "range_end": "192.0.2.70",
        "routes": [
          {
            "dst": "192.0.2.0/16",
            "gw" : "192.0.2.150"
          }
        ]
      }
    }
EOF_CAT
oc apply -n openstack -f octavia-net-attach-def.yaml
Verification
Confirm that the network attachment for the load-balancing management network is successful.
oc get net-attach-def -n openstack
Sample output
When successful, you should see octavia displayed in the network name list:

NAME          AGE
ctlplane      25m
datacentre    25m
internalapi   25m
octavia       25m
storage       25m
tenant        25m
Next steps
3.3. Creating a CA passphrase for certificate generation and signing
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you create a Secret custom resource (CR) that is used to encrypt the generated private key of the Server CA. RHOSO uses dual CAs to make communication between a Load-balancing service (octavia) amphora and its controller more secure.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Generate a Base64-encoded password.
Retain the encoded output to use in a later step.
Example
In this example, the password my_password is encoded using the Base64 encoding scheme:

$ echo -n my_password | base64
Create a Secret CR file on your workstation, for example, octavia-ca-passphrase.yaml.

Add the following configuration to octavia-ca-passphrase.yaml:

apiVersion: v1
data:
  server-ca-passphrase: <Base64_password>
kind: Secret
metadata:
  name: octavia-ca-passphrase
  namespace: openstack
type: Opaque
- Replace <Base64_password> with the Base64-encoded password that you created earlier.
Create the Secret CR in the cluster:

$ oc create -f octavia-ca-passphrase.yaml
Verification
Confirm that the Secret CR exists:

$ oc describe secret octavia-ca-passphrase -n openstack
Next steps
3.4. Deploying the Load-balancing service
To deploy the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), you must configure the OVN controller to create a NIC mapping for the provider network, and add the octavia network attachment to the networkAttachments property of each Load-balancing service component that controls load balancers (amphorae).
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
Procedure
Update the OpenStackControlPlane custom resource with the required values for the Load-balancing service.

Important

- In the following example, controlplane is the name of the OpenStackControlPlane custom resource (CR). Use the correct name for your OpenStackControlPlane CR.
- The value for nicMappings must be octavia: octbr.

oc patch -n openstack openstackcontrolplane controlplane --type=merge --patch '
spec:
  ovn:
    template:
      ovnController:
        nicMappings:
          octavia: octbr
  octavia:
    enabled: true
    template:
      octaviaHousekeeping:
        networkAttachments:
        - octavia
      octaviaHealthManager:
        networkAttachments:
        - octavia
      octaviaWorker:
        networkAttachments:
        - octavia
'
Verification
Confirm that the Load-balancing service (octavia) pods are running:
$ oc get pods | grep octavia
Sample output
You should see output similar to the following. The number of entries and their suffixes will vary depending on the details of your environment:
octavia-api-5cf9bc78f7-4lmds            2/2   Running   0   42h
octavia-healthmanager-5g94j             1/1   Running   0   21h
octavia-housekeeping-5gtw8              1/1   Running   0   21h
octavia-image-upload-78b4b6c47c-xzdtl   1/1   Running   0   35h
octavia-worker-pq55m                    1/1   Running   0   21h
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Confirm that the networks octavia-provider-net and lb-mgmt-net are present:

$ openstack network list -f yaml
Sample output
- ID: 2e4fc309-546b-4ac8-9eae-aa8d70a27a9b
  Name: octavia-provider-net
  Subnets:
  - eea45073-6e56-47fd-9153-12f7f49bc115
- ID: 77881d3f-04b0-46cb-931f-d54003cce9f0
  Name: lb-mgmt-net
  Subnets:
  - e4ab96af-8077-4971-baa4-e0d40a16f55a
The network octavia-provider-net is the external provider network, and is limited to the RHOSO control plane. The lb-mgmt-net network connects the Load-balancing service to amphora instances.

Exit the openstackclient pod:

$ exit
Chapter 4. Monitoring the Load-balancing service
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to keep load balancing operational, you can use the load-balancer management network and create, modify, and delete load-balancing health monitors:
- Section 4.1, “The Load-balancing service management network”
- Section 4.2, “Load-balancing service instance monitoring”
- Section 4.3, “Load-balancing service pool member monitoring”
- Section 4.4, “Load balancer provisioning status monitoring”
- Section 4.5, “Load balancer functionality monitoring”
- Section 4.6, “About Load-balancing service health monitors”
- Section 4.7, “Creating Load-balancing service health monitors”
- Section 4.8, “Modifying Load-balancing service health monitors”
- Section 4.9, “Deleting Load-balancing service health monitors”
- Section 4.10, “Best practices for Load-balancing service HTTP health monitors”
4.1. The Load-balancing service management network
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) controller pods require network connectivity across the OpenStack cloud in order to monitor and manage amphora load-balancer virtual machines (VMs). The Load-balancing service management network is actually two OpenStack networks: a project (tenant) network that is connected to the amphora VMs, and a provider network that connects the Load-balancing service controllers running in the podified control plane through a network defined by a Red Hat OpenShift network attachment. An OpenStack router routes packets between the project network and the provider network; both the control plane pods and the load balancer VMs have routes configured to direct traffic for those networks through the router.
Figure 4.1. Control plane networking for the Load-balancing service
- GENEVE tunnel connection.
- Patch ports.
- The Octavia Operator implements NIC mappings by adding the octavia network attachment to the br-octavia bridge.
- The project connection is configured through the ovn-controller NetworkAttachment property. The provider network attachment is added by using the nicMappings property. The nicMappings property instructs the OVN operator to configure a bridge mapping for the network attachment, which allows a provider network to be created that uses the attachment as the physical interface.
- The octavia network attachment is also added to each type of octavia amphora controller pod: housekeeping, health manager, and worker, using the networkAttachments property for each.
- The octavia network attachments are a bridge type attachment, configured with octbr as the bridge name.
- The NodeNetworkConfigurationPolicy defines a VLAN interface similar to what is used for the internal API, storage networks, and so on. The policy also initially creates the octbr bridge and adds the octavia VLAN interface to it. These actions enable connections across the OpenShift nodes and isolate the network traffic from other VLAN and non-VLAN networks.
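You can confirm the pieces of this topology from the OpenStackClient pod. The following commands only list the existing networks and routers; the exact names that your deployment shows depend on how the Octavia operator created them.

$ oc rsh -n openstack openstackclient
$ openstack network list
$ openstack router list
$ exit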
4.2. Load-balancing service instance monitoring
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the Load-balancing service (octavia) monitors the load balancing instances (amphorae) and initiates failovers and replacements if the amphorae malfunction. Any time a failover occurs, the Load-balancing service logs the failover in the corresponding health manager log on the controller in /var/log/containers/octavia.
Use log analytics to monitor failover trends to address problems early. Problems such as Networking service (neutron) connectivity issues, Denial of Service attacks, and Compute service (nova) malfunctions often lead to higher failover rates for load balancers.
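For example, you can count failover messages in the health manager logs and track the result over time. In the following sketch, the health manager pod name is taken from the sample output in Section 3.4 and is only an example, and the log directory is the one described above; adjust both for your environment.

$ oc get pods -n openstack | grep octavia-healthmanager
$ oc exec -n openstack octavia-healthmanager-5g94j -- \
    grep -ric failover /var/log/containers/octavia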
4.3. Load-balancing service pool member monitoring
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the Load-balancing service (octavia) uses the health information from the underlying load balancing subsystems to determine the health of members of the load-balancing pool. Health information is streamed to the Load-balancing service database, and made available by the status tree or other API methods. For critical applications, you must poll for health information at regular intervals.
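For example, you can poll the status tree of a load balancer, which returns the operating status of every listener, pool, and member in a single call; lb1 is a placeholder name.

$ openstack loadbalancer status show lb1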
4.4. Load balancer provisioning status monitoring
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can monitor the provisioning status of a load balancer and send alerts if the provisioning status is ERROR. Do not configure an alert to trigger when an application is making regular changes to the pool and the load balancer enters several PENDING states.
The provisioning status of a load balancer object reflects the ability of the control plane to contact and successfully provision a create, update, and delete request. The operating status of a load balancer object reports on the current functionality of the load balancer.
For example, a load balancer might have a provisioning status of ERROR, but an operating status of ONLINE. This might be caused by a Networking service (neutron) failure that blocked the last requested update to the load balancer configuration from successfully completing. In this case, the load balancer continues to process traffic, but might not have applied the latest configuration updates yet.
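For example, the following command retrieves only the provisioning and operating status of a load balancer, which is convenient for scripted checks; lb1 is a placeholder name.

$ openstack loadbalancer show lb1 -c provisioning_status -c operating_status -f value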
4.5. Load balancer functionality monitoring
You can monitor the operational status of your load balancer and its child objects in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
You can also use an external monitoring service that connects to your load balancer listeners and monitors them from outside of the cloud. An external monitoring service indicates if there is a failure outside of the Load-balancing service (octavia) that might impact the functionality of your load balancer, such as router failures, network connectivity issues, and so on.
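An external check can be as simple as probing the listener VIP from a host outside of the cloud and alerting on the HTTP status code. The VIP address in this sketch is a placeholder value.

$ curl --silent --output /dev/null --write-out '%{http_code}\n' http://203.0.113.0/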
4.6. About Load-balancing service health monitors
A Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitor is a process that performs periodic health checks on each back-end member server to pre-emptively detect failed servers and temporarily pull them out of the pool.
If the health monitor detects a failed server, it removes the server from the pool and marks the member as ERROR. After you have corrected the server and it is functional again, the health monitor automatically changes the status of the member from ERROR to ONLINE, and resumes passing traffic to it.
Always use health monitors in production load balancers. If you do not have a health monitor, failed servers are not removed from the pool. This can lead to service disruption for web clients.
There are several types of health monitors, as briefly described here:
- HTTP - By default, probes the / path on the application server.
- HTTPS - Operates exactly like HTTP health monitors, but with TLS back-end servers. If the servers perform client certificate validation, HAProxy does not have a valid certificate. In these cases, TLS-HELLO health monitoring is an alternative.
- TLS-HELLO - Ensures that the back-end server responds to SSLv3-client hello messages. A TLS-HELLO health monitor does not check any other health metrics, like status code or body contents.
- PING - Sends periodic ICMP ping requests to the back-end servers. You must configure back-end servers to allow PINGs so that these health checks pass.
  Important: A PING health monitor checks only if the member is reachable and responds to ICMP echo requests. PING health monitors do not detect if the application that runs on an instance is healthy. Use PING health monitors only in cases where an ICMP echo request is a valid health check.
- TCP - Opens a TCP connection to the back-end server protocol port. The TCP application opens a TCP connection and, after the TCP handshake, closes the connection without sending any data.
- UDP-CONNECT - Performs a basic UDP port connect. A UDP-CONNECT health monitor might not work correctly if Destination Unreachable (ICMP type 3) is not enabled on the member server, or if it is blocked by a security rule. In these cases, a member server might be marked as having an operating status of ONLINE when it is actually down.
4.7. Creating Load-balancing service health monitors
Use Load-balancing service (octavia) health monitors to avoid service disruptions for your users. The health monitors run periodic health checks on each back end server to pre-emptively detect failed servers and temporarily pull the servers out of the pool in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Run the openstack loadbalancer healthmonitor create command, using argument values that are appropriate for your site.

All health monitor types require the following configurable arguments:
<pool> - Name or ID of the pool of back-end member servers to be monitored.
--type - The type of health monitor. One of HTTP, HTTPS, PING, SCTP, TCP, TLS-HELLO, or UDP-CONNECT.
--delay - Number of seconds to wait between health checks.
--timeout - Number of seconds to wait for any given health check to complete. The timeout must always be smaller than the delay.
--max-retries - Number of health checks a back-end server must fail before it is considered down. Also, the number of health checks that a failed back-end server must pass to be considered up again.
In addition, HTTP health monitor types also require the following arguments, which are set by default:
--url-path - Path part of the URL that should be retrieved from the back-end server. By default this is /.
--http-method - HTTP method that is used to retrieve the url_path. By default this is GET.
--expected-codes - List of HTTP status codes that indicate an OK health check. By default this is 200.

Example
$ openstack loadbalancer healthmonitor create --name my-health-monitor --delay 10 --max-retries 4 --timeout 5 --type TCP lb-pool-1
Verification
- Run the openstack loadbalancer healthmonitor list command and verify that your health monitor is running.
4.8. Modifying Load-balancing service health monitors
You can modify the configuration for Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitors when you want to change the interval for sending probes to members, the connection timeout interval, the HTTP method for requests, and so on.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Modify your health monitor (my-health-monitor).

In this example, a user is changing the time in seconds that the health monitor waits between sending probes to members.
Example
$ openstack loadbalancer healthmonitor set my_health_monitor --delay 600
Verification
Run the openstack loadbalancer healthmonitor show command to confirm your configuration changes:

$ openstack loadbalancer healthmonitor show my_health_monitor
4.9. Deleting Load-balancing service health monitors
You can remove a Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) health monitor.
An alternative to deleting a health monitor is to disable it by using the openstack loadbalancer healthmonitor set --disable command.
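For example, to disable the health monitor that is used in this procedure without deleting it:

$ openstack loadbalancer healthmonitor set --disable my-health-monitor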
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Delete the health monitor (my-health-monitor).

Example
$ openstack loadbalancer healthmonitor delete my-health-monitor
Verification
- Run the openstack loadbalancer healthmonitor list command to verify that the health monitor you deleted no longer exists.
4.10. Best practices for Load-balancing service HTTP health monitors
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, when you write the code that generates the health check in your web application, use the following best practices:
- The health monitor url-path does not require authentication to load.
- By default, the health monitor url-path returns an HTTP 200 OK status code to indicate a healthy server unless you specify alternate expected-codes.
- The health check does enough internal checks to ensure that the application is healthy and no more. Ensure that the following conditions are met for the application:
- Any required database or other external storage connections are up and running.
- The load is acceptable for the server on which the application runs.
- Your site is not in maintenance mode.
- Tests specific to your application are operational.
- The page generated by the health check should be small in size:
- It returns in a sub-second interval.
- It does not induce significant load on the application server.
- The page generated by the health check is never cached, although the code that runs the health check might reference cached data.
  For example, you might find it useful to run a more extensive health check using cron and store the results to disk. The code that generates the page at the health monitor url-path incorporates the results of this cron job in the tests it performs.
- Because the Load-balancing service only processes the HTTP status code returned, and because health checks are run so frequently, you can use the HEAD or OPTIONS HTTP methods to skip processing the entire page.
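For example, the following health monitor uses the HEAD method so that only the response status code is evaluated. The pool name, timing values, and /health path are sample values only.

$ openstack loadbalancer healthmonitor create --name healthmon1 \
  --delay 15 --max-retries 4 --timeout 10 --type HTTP \
  --http-method HEAD --expected-codes 200 --url-path /health pool1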
Chapter 5. Creating non-secure HTTP load balancers
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can create the following load balancers for non-secure HTTP network traffic:
5.1. Creating an HTTP load balancer with a health monitor
For networks that are not compatible with Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) floating IPs, create a load balancer to manage network traffic for non-secure HTTP applications. Create a health monitor to ensure that your back-end members remain available.
Prerequisites
- A shared external (public) subnet that you can reach from the internet.
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Create a load balancer (lb1) on a public subnet (public_subnet).

Note

Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a listener (listener1) on a port (80).

Example
$ openstack loadbalancer listener create --name listener1 \ --protocol HTTP --protocol-port 80 lb1
Verify the state of the listener.
Example
$ openstack loadbalancer listener show listener1
Before going to the next step, ensure that the status is ACTIVE.

Create the listener default pool (pool1).

Example
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (healthmon1) of type (HTTP) on the pool (pool1) that connects to the back-end servers and tests the path (/).

Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE.

Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add load balancer members (192.0.2.10 and 192.0.2.11) on the private subnet (private_subnet) to the default pool.

Example

In this example, the back-end servers, 192.0.2.10 and 192.0.2.11, are named member1 and member2, respectively:

$ openstack loadbalancer member create --name member1 --subnet-id \
  private_subnet --address 192.0.2.10 --protocol-port 80 pool1
$ openstack loadbalancer member create --name member2 --subnet-id \
  private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (lb1) settings:
Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:13 | | vip_address | 198.51.100.12 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
A working member (
member1
) has anONLINE
value for itsoperating_status
.Example
$ openstack loadbalancer member show pool1 member1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:16:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:20:45 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
5.2. Creating an HTTP load balancer that uses a floating IP
To manage network traffic for non-secure HTTP applications, create a Red Hat OpenStack Services on OpenShift (RHOSO) load balancer with a virtual IP (VIP) that depends on a floating IP. The advantage of using a floating IP is that you retain control of the assigned IP, which is necessary if you need to move, destroy, or recreate your load balancer. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Floating IPs do not work with IPv6 networks.
Prerequisites
- A floating IP to use with a load balancer VIP.
- A RHOSO Networking service (neutron) shared external (public) subnet that you can reach from the internet to use for the floating IP.
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Create a load balancer (lb1) on a private subnet (private_subnet).

Note

Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id private_subnet --wait
In the output from the previous step, record the value of load_balancer_vip_port_id, because you must provide it in a later step.

Create a listener (listener1) on a port (80).

Example
$ openstack loadbalancer listener create --name listener1 \ --protocol HTTP --protocol-port 80 lb1
Create the listener default pool (pool1).

Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (healthmon1) of type (HTTP) on the pool (pool1) that connects to the back-end servers and tests the path (/).

Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE.

Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add load balancer members (192.0.2.10 and 192.0.2.11) on the private subnet to the default pool.

Example

In this example, the back-end servers, 192.0.2.10 and 192.0.2.11, are named member1 and member2, respectively:

$ openstack loadbalancer member create --name member1 --subnet-id \
  private_subnet --address 192.0.2.10 --protocol-port 80 pool1
$ openstack loadbalancer member create --name member2 --subnet-id \
  private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Create a floating IP address on the shared external subnet (public).

Example
$ openstack floating ip create public
In the output from step 8, record the value of floating_ip_address, because you must provide it in a later step.

Associate this floating IP (203.0.113.0) with the load balancer vip_port_id (69a85edd-5b1c-458f-96f2-b4552b15b8e6).

Example
$ openstack floating ip set --port 69a85edd-5b1c-458f-96f2-b4552b15b8e6 203.0.113.0
Verification
Verify HTTP traffic flows across the load balancer by using the floating IP (203.0.113.0).

Example
$ curl -v http://203.0.113.0 --insecure
Sample output
* About to connect() to 203.0.113.0 port 80 (#0)
*   Trying 203.0.113.0...
* Connected to 203.0.113.0 (203.0.113.0) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 203.0.113.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 30
<
* Connection #0 to host 203.0.113.0 left intact
When a health monitor is present and functioning properly, you can check the status of each member.
A working member (
member1
) has anONLINE
value for itsoperating_status
.Example
$ openstack loadbalancer member show pool1 member1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
5.3. Creating an HTTP load balancer with session persistence
To manage network traffic for non-secure HTTP applications, you can create Red Hat OpenStack Services on OpenShift (RHOSO) load balancers that track session persistence. Doing so ensures that when a request comes in, the load balancer directs subsequent requests from the same client to the same back-end server. Session persistence optimizes load balancing by saving time and memory.
Prerequisites
- A shared external (public) subnet that you can reach from the internet.
- The non-secure web applications whose network traffic you are load balancing have cookies enabled.
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Create a load balancer (lb1) on a public subnet (public_subnet).

Note

Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a listener (listener1) on a port (80).

Example
$ openstack loadbalancer listener create --name listener1 \ --protocol HTTP --protocol-port 80 lb1
Create the listener default pool (pool1) that defines session persistence on a cookie (PHPSESSIONID).

Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP \ --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID
Create a health monitor (healthmon1) of type (HTTP) on the pool (pool1) that connects to the back-end servers and tests the path (/).

Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE.

Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add load balancer members (192.0.2.10 and 192.0.2.11) on the private subnet (private_subnet) to the default pool.

Example

In this example, the back-end servers, 192.0.2.10 and 192.0.2.11, are named member1 and member2, respectively:

$ openstack loadbalancer member create --name member1 --subnet-id \
  private_subnet --address 192.0.2.10 --protocol-port 80 pool1
$ openstack loadbalancer member create --name member2 --subnet-id \
  private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (lb1) settings:

Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:58 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:28:42 | | vip_address | 198.51.100.22 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
A working member (
member1
) has anONLINE
value for itsoperating_status
.Example
$ openstack loadbalancer member show pool1 member1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.02.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:23 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:28:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
Chapter 6. Creating secure HTTP load balancers
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can create various types of load balancers to manage secure HTTP (HTTPS) network traffic:
- Section 6.1, “About non-terminated HTTPS load balancers”
- Section 6.2, “Creating a non-terminated HTTPS load balancer”
- Section 6.3, “About TLS-terminated HTTPS load balancers”
- Section 6.4, “Creating a TLS-terminated HTTPS load balancer”
- Section 6.5, “Creating a TLS-terminated HTTPS load balancer with SNI”
- Section 6.6, “Creating a TLS-terminated load balancer with an HTTP/2 listener”
- Section 6.7, “Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end”
6.1. About non-terminated HTTPS load balancers
A non-terminated HTTPS load balancer acts effectively like a generic TCP load balancer: the load balancer forwards the raw TCP traffic from the web client to the back-end servers, where the HTTPS connection is terminated with the web clients. While non-terminated HTTPS load balancers do not support advanced load balancer features like Layer 7 functionality, they do lower load balancer resource utilization because the back-end servers manage the certificates and keys themselves.
6.2. Creating a non-terminated HTTPS load balancer
If your application requires HTTPS traffic to terminate on the back-end member servers, typically called HTTPS pass-through, you can use the HTTPS protocol for your load balancer listeners in a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Prerequisites
- The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud.
- The python-openstackclient package resides on your workstation.

  $ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
Procedure
Confirm that the system OS_CLOUD variable is set for your cloud:

$ echo $OS_CLOUD
my_cloud

Reset the variable if necessary:

$ export OS_CLOUD=my_other_cloud

As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command.

Create a load balancer (lb1) on a public subnet (public_subnet).

Note

Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a listener (listener1) on a port (443).

Example
$ openstack loadbalancer listener create --name listener1 \ --protocol HTTPS --protocol-port 443 lb1
Create the listener default pool (pool1).

Example
The command in this example creates an HTTPS pool that uses a private subnet containing back-end servers that host HTTPS applications configured with a TLS-encrypted web application on TCP port 443:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 \ --protocol HTTPS
Create a health monitor (healthmon1) of type (TLS-HELLO) on the pool (pool1) that connects to the back-end servers and tests the path (/).

Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be ONLINE.

Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type TLS-HELLO \ --url-path / pool1
Add load balancer members (192.0.2.10 and 192.0.2.11) on the private subnet (private_subnet) to the default pool.

Example

In this example, the back-end servers, 192.0.2.10 and 192.0.2.11, are named member1 and member2, respectively:

$ openstack loadbalancer member create --name member1 --subnet-id \
  private_subnet --address 192.0.2.10 --protocol-port 443 pool1
$ openstack loadbalancer member create --name member2 --subnet-id \
  private_subnet --address 192.0.2.11 --protocol-port 443 pool1
Verification
View and verify the load balancer (lb1) settings.

Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
Example
A working member (
member1
) has anONLINE
value for itsoperating_status
.$ openstack loadbalancer member show pool1 member1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 443 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
6.3. About TLS-terminated HTTPS load balancers
When a TLS-terminated HTTPS load balancer is implemented in a Red Hat OpenStack Services on OpenShift (RHOSO) environment, web clients communicate with the load balancer over Transport Layer Security (TLS) protocols. The load balancer terminates the TLS session and forwards the decrypted requests to the back-end servers. When you terminate the TLS session on the load balancer, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection.
6.4. Creating a TLS-terminated HTTPS load balancer
When you use TLS-terminated HTTPS load balancers, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
TLS public-key cryptography is configured with the following characteristics:
-
A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example,
www.example.com
. - The certificate, key, and intermediate certificate chain reside in separate files in the current directory.
- The key and certificate are PEM-encoded.
- The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together.
-
A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example,
- You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide.
Procedure
Combine the key (
server.key
), certificate (server.crt
), and intermediate certificate chain (ca-chain.crt
) into a single PKCS12 file (server.p12
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openssl pkcs12 -export -inkey server.key -in server.crt \ -certfile ca-chain.crt -passout pass: -out server.p12
Note: The following procedure does not work if you password protect the PKCS12 file.
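Optionally, you can verify that the PKCS12 bundle is readable and is not password protected before you store it. This is a suggested check, not part of the required procedure, and it assumes the sample file name server.p12:
$ openssl pkcs12 -in server.p12 -passin pass: -info -nokeys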
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Use the Key Manager service to create a secret resource (
tls_secret1
) for the PKCS12 file.Example
$ openstack secret store --name='tls_secret1' \ -t 'application/octet-stream' -e 'base64' \ --payload="$(base64 < server.p12)"
Create a load balancer (
lb1
) on the public subnet (public_subnet
).Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a
TERMINATED_HTTPS
listener (listener1
), and reference the secret resource as the default TLS container for the listener.Example
$ openstack loadbalancer listener create --name listener1 \ --protocol-port 443 --protocol TERMINATED_HTTPS \ --default-tls-container=\ $(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
Create a pool (
pool1
) and make it the default pool for the listener.Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80:
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (
healthmon1
) of type (HTTP
) on the pool (pool1
) that connects to the back-end servers and tests the path (/
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add the non-secure HTTP back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
Example
$ openstack loadbalancer member show pool1 member1
A working member (
member1
) has anONLINE
value for itsoperating_status
:Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
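As an optional check, you can confirm that the load balancer terminates TLS correctly by sending an HTTPS request to the VIP. The following command is a suggestion that assumes the sample DNS name www.example.com from the certificate and the VIP address 198.51.100.11 from the sample output; the curl --resolve option maps the DNS name to the VIP without requiring DNS changes:
$ curl -v https://www.example.com/ --resolve www.example.com:443:198.51.100.11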
6.5. Creating a TLS-terminated HTTPS load balancer with SNI
For TLS-terminated HTTPS load balancers that employ Server Name Indication (SNI) technology, a single listener can contain multiple TLS certificates and enable the load balancer to know which certificate to present when it uses a shared IP. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
TLS public-key cryptography is configured with the following characteristics:
-
Multiple TLS certificates, keys, and intermediate certificate chains have been obtained from an external certificate authority (CA) for the DNS names assigned to the load balancer VIP address, for example,
www.example.com
andwww2.example.com
. - The keys and certificates are PEM-encoded.
-
Multiple TLS certificates, keys, and intermediate certificate chains have been obtained from an external certificate authority (CA) for the DNS names assigned to the load balancer VIP address, for example,
- You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide.
Procedure
For each of the TLS certificates in the SNI list, combine the key (
server.key
), certificate (server.crt
), and intermediate certificate chain (ca-chain.crt
) into a single PKCS12 file (server.p12
).In this example, you create two PKCS12 files (
server.p12
andserver2.p12
), one for each certificate (www.example.com
andwww2.example.com
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openssl pkcs12 -export -inkey server.key -in server.crt \ -certfile ca-chain.crt -passout pass: -out server.p12 $ openssl pkcs12 -export -inkey server2.key -in server2.crt \ -certfile ca-chain2.crt -passout pass: -out server2.p12
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Use the Key Manager service to create secret resources (
tls_secret1
andtls_secret2
) for the PKCS12 files.Example
$ openstack secret store --name='tls_secret1' \ -t 'application/octet-stream' -e 'base64' \ --payload="$(base64 < server.p12)" $ openstack secret store --name='tls_secret2' \ -t 'application/octet-stream' -e 'base64' \ --payload="$(base64 < server2.p12)"
Create a load balancer (
lb1
) on the public subnet (public_subnet
).Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a TERMINATED_HTTPS listener (
listener1
), and use SNI to reference both the secret resources.(Reference
tls_secret1
as the default TLS container for the listener.)Example
$ openstack loadbalancer listener create --name listener1 \ --protocol-port 443 --protocol TERMINATED_HTTPS \ --default-tls-container=\ $(openstack secret list | awk '/ tls_secret1 / {print $2}') \ --sni-container-refs \ $(openstack secret list | awk '/ tls_secret1 / {print $2}') \ $(openstack secret list | awk '/ tls_secret2 / {print $2}') -- lb1
Create a pool (
pool1
) and make it the default pool for the listener.Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (
healthmon1
) of type (HTTP
) on the pool (pool1
) that connects to the back-end servers and tests the path (/
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add the non-secure HTTP back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
Example
$ openstack loadbalancer member show pool1 member1
Sample output
A working member (
member1
) has anONLINE
value for itsoperating_status
:+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
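As an optional check, you can confirm that SNI presents the correct certificate for each host name. These suggested commands assume the sample DNS names www.example.com and www2.example.com and the VIP address 198.51.100.11; in the verbose output, the certificate subject should match the requested host name:
$ curl -v https://www.example.com/ --resolve www.example.com:443:198.51.100.11
$ curl -v https://www2.example.com/ --resolve www2.example.com:443:198.51.100.11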
6.6. Creating a TLS-terminated load balancer with an HTTP/2 listener
When you use TLS-terminated HTTPS load balancers, you offload the CPU-intensive encryption operations to the load balancer, and allow the load balancer to use advanced features such as Layer 7 inspection. With the addition of an HTTP/2 listener, you can leverage the HTTP/2 protocol to improve performance by loading pages faster. Load balancers negotiate HTTP/2 with clients by using the Application-Layer Protocol Negotiation (ALPN) TLS extension.
The Load-balancing service (octavia) supports end-to-end HTTP/2 traffic, which means that the HTTP/2 traffic is not translated by HAProxy from the point where the request reaches the listener until the response returns from the load balancer. To achieve end-to-end HTTP/2 traffic, you must have an HTTP pool with back-end re-encryption: pool members that are listening on a secure port and web applications that are configured for HTTPS traffic.
You can send HTTP/2 traffic to an HTTP pool without back-end re-encryption. In this situation, HAProxy translates the traffic before it reaches the pool, and the response is translated back to HTTP/2 before it returns from the load balancer.
Red Hat recommends that you create a health monitor to ensure that your back-end members remain available in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Currently, the Load-balancing service does not support health monitoring for TLS-terminated load balancers that use HTTP/2 listeners.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
TLS public-key cryptography is configured with the following characteristics:
-
A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example,
www.example.com
. - The certificate, key, and intermediate certificate chain reside in separate files in the current directory.
- The key and certificate are PEM-encoded.
- The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together.
-
A TLS certificate, key, and intermediate certificate chain is obtained from an external certificate authority (CA) for the DNS name that is assigned to the load balancer VIP address, for example,
- You must configure the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide.
Procedure
Combine the key (
server.key
), certificate (server.crt
), and intermediate certificate chain (ca-chain.crt
) into a single PKCS12 file (server.p12
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Important: When you create the PKCS12 file, do not password protect the file.
Example
In this example, the PKCS12 file is created without a password:
$ openssl pkcs12 -export -inkey server.key -in server.crt \ -certfile ca-chain.crt -passout pass: -out server.p12
Use the Key Manager service to create a secret resource (
tls_secret1
) for the PKCS12 file.Example
$ openstack secret store --name='tls_secret1' \ -t 'application/octet-stream' -e 'base64' \ --payload="$(base64 < server.p12)"
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a load balancer (
lb1
) on the public subnet (public_subnet
).Example
$ openstack loadbalancer create --name lb1 --vip-subnet-id \ public_subnet --wait
Create a
TERMINATED_HTTPS
listener (listener1
) and do the following:-
reference the secret resource (
tls_secret1
) as the default TLS container for the listener. -
set the ALPN protocol (
h2
). set the fallback protocol if the client does not support HTTP/2 (
http/1.1
).Example
$ openstack loadbalancer listener create --name listener1 \ --protocol-port 443 --protocol TERMINATED_HTTPS --alpn-protocol h2 \ --alpn-protocol http/1.1 --default-tls-container=\ $(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
-
reference the secret resource (
Create a pool (
pool1
) and make it the default pool for the listener.Example
The command in this example creates an HTTP pool containing back-end servers that host HTTP applications configured with a web application on TCP port 80:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (
healthmon1
) of type (TCP
) on the pool (pool1
) that connects to the back-end servers.Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type TCP pool1
Add the HTTP back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer status show lb1
Sample output
{ "loadbalancer": { "id": "936dad29-4c3f-4f24-84a8-c0e6f10ed810", "name": "lb1", "operating_status": "ONLINE", "provisioning_status": "ACTIVE", "listeners": [ { "id": "708b82c6-8a6b-4ec1-ae53-e619769821d4", "name": "listener1", "operating_status": "ONLINE", "provisioning_status": "ACTIVE", "pools": [ { "id": "5ad7c678-23af-4422-8edb-ac3880bd888b", "name": "pool1", "provisioning_status": "ACTIVE", "operating_status": "ONLINE", "health_monitor": { "id": "4ad786ef-6661-4e31-a325-eca07b2b3dd1", "name": "healthmon1", "type": "TCP", "provisioning_status": "ACTIVE", "operating_status": "ONLINE" }, "members": [ { "id": "facca0d3-61a7-4b46-85e8-da6994883647", "name": "member1", "operating_status": "ONLINE", "provisioning_status": "ACTIVE", "address": "192.0.2.10", "protocol_port": 80 }, { "id": "2b0d9e0b-8e0c-48b8-aa57-90b2fde2eae2", "name": "member2", "operating_status": "ONLINE", "provisioning_status": "ACTIVE", "address": "192.0.2.11", "protocol_port": 80 } ...
When a health monitor is present and functioning properly, you can check the status of each member.
Example
$ openstack loadbalancer member show pool1 member1
Sample output
A working member (
member1
) has anONLINE
value for itsoperating_status
:+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-08-16T20:08:01 | | id | facca0d3-61a7-4b46-85e8-da6994883647 | | name | member1 | | operating_status | ONLINE | | project_id | 9b29c91f67314bd09eda9018616851cf | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 3b459c95-64d2-4cfa-b348-01aacc4b3fa9 | | updated_at | 2024-08-16T20:25:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | | tags | | +---------------------+--------------------------------------+
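As an optional check, you can confirm that the listener negotiates HTTP/2 over ALPN. This suggested command assumes the sample DNS name www.example.com; replace <vip_address> with the VIP address of your load balancer. A successful negotiation reports that the server accepted the h2 protocol and returns HTTP/2 response lines in the verbose output:
$ curl -v --http2 https://www.example.com/ --resolve www.example.com:443:<vip_address>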
6.7. Creating HTTP and TLS-terminated HTTPS load balancing on the same IP and back-end
You can configure a non-secure listener and a TLS-terminated HTTPS listener on the same load balancer and the same IP address when you want to respond to web clients with the exact same content, regardless if the client is connected with a secure or non-secure HTTP protocol. In Red Hat OpenStack Services on OpenShift (RHOSO) environments, it is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
TLS public-key cryptography is configured with the following characteristics:
- A TLS certificate, key, and optional intermediate certificate chain have been obtained from an external certificate authority (CA) for the DNS name assigned to the load balancer VIP address (for example, www.example.com).
- The certificate, key, and intermediate certificate chain reside in separate files in the current directory.
- The key and certificate are PEM-encoded.
- The intermediate certificate chain contains multiple certificates that are PEM-encoded and concatenated together.
- You have configured the Load-balancing service (octavia) to use the Key Manager service (barbican). For more information, see the Managing secrets with the Key Manager service guide.
- The non-secure HTTP listener is configured with the same pool as the HTTPS TLS-terminated load balancer.
Procedure
Combine the key (
server.key
), certificate (server.crt
), and intermediate certificate chain (ca-chain.crt
) into a single PKCS12 file (server.p12
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openssl pkcs12 -export -inkey server.key -in server.crt \ -certfile ca-chain.crt -passout pass: -out server.p12
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Use the Key Manager service to create a secret resource (
tls_secret1
) for the PKCS12 file.Example
$ openstack secret store --name='tls_secret1' \ -t 'application/octet-stream' -e 'base64' \ --payload="$(base64 < server.p12)"
Create a load balancer (
lb1
) on the public subnet (public_subnet
).Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a TERMINATED_HTTPS listener (
listener1
), and reference the secret resource as the default TLS container for the listener.Example
$ openstack loadbalancer listener create --name listener1 \ --protocol-port 443 --protocol TERMINATED_HTTPS \ --default-tls-container=\ $(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
Create a pool (
pool1
) and make it the default pool for the listener.Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host non-secure HTTP applications on TCP port 80:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (
healthmon1
) of type (HTTP
) on the pool (pool1
) that connects to the back-end servers and tests the path (/
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add the non-secure HTTP back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Create a non-secure HTTP listener (listener2) and make its default pool the same as that of the secure listener.
Example
$ openstack loadbalancer listener create --name listener2 \ --protocol-port 80 --protocol HTTP --default-pool pool1 lb1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
Example
$ openstack loadbalancer member show pool1 member1
Sample output
A working member (
member1
) has anONLINE
value for itsoperating_status
:+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
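As an optional check, you can confirm that the secure and non-secure listeners return the same content from the shared pool. These suggested commands assume the sample DNS name www.example.com and the VIP address 198.51.100.11:
$ curl http://www.example.com/ --resolve www.example.com:80:198.51.100.11
$ curl https://www.example.com/ --resolve www.example.com:443:198.51.100.11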
Chapter 7. Creating other kinds of load balancers
This content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you use the Load-balancing service (octavia) to create the type of load balancer that matches the type of non-HTTP network traffic that you want to manage:
7.1. Creating a TCP load balancer
You can create a load balancer when you need to manage network traffic for non-HTTP, TCP-based services and applications. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a load balancer (
lb1
) on the public subnet (public_subnet
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --wait
Create a
TCP
listener (listener1
) on the specified port (23456
) for which the custom application is configured.Example
$ openstack loadbalancer listener create --name listener1 \ --protocol TCP --protocol-port 23456 lb1
Create a pool (
pool1
) and make it the default pool for the listener.Example
In this example, a pool is created that uses a private subnet containing back-end servers that host a custom application on a specific TCP port:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 \ --protocol TCP
Create a health monitor (
healthmon1
) on the pool (pool1
) that connects to the back-end servers and probes the TCP service port.Example
Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type TCP pool1
Add the back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member. Use the following command to obtain a member ID:
Example
$ openstack loadbalancer member list pool1
A working member (
member1
) has anONLINE
value for itsoperating_status
.Example
$ openstack loadbalancer member show pool1 member1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 80 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
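As an optional check, you can confirm that the listener accepts TCP connections on the custom port from a client that can reach the VIP address. This is a suggested test using the sample values; flag support varies between netcat implementations, and any TCP client that can open a connection to the VIP works equally well:
$ nc -zv 198.51.100.11 23456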
7.2. Creating a UDP load balancer with a health monitor
You can create a Red Hat OpenStack Services on OpenShift (RHOSO) load balancer when you need to manage network traffic on UDP ports. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
- No security rules that block ICMP Destination Unreachable messages (ICMP type 3).
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a load balancer (
lb1
) on a private subnet (private_subnet
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id private_subnet --wait
Create a listener (
listener1
) on a port (1234
).Example
$ openstack loadbalancer listener create --name listener1 \ --protocol UDP --protocol-port 1234 lb1
Create the listener default pool (
pool1
).Example
The command in this example creates a pool that uses a private subnet containing back-end servers that host one or more applications configured to use UDP ports:
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 --protocol UDP
Create a health monitor (
healthmon1
) on the pool (pool1
) that connects to the back-end servers by using UDP (UDP-CONNECT
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 5 --max-retries 2 --timeout 3 --type UDP-CONNECT pool1
Add the back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the default pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 1234 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 1234 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
When a health monitor is present and functioning properly, you can check the status of each member.
Example
$ openstack loadbalancer member show pool1 member1
A working member (
member1
) has anONLINE
value for itsoperating_status
.Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | address | 192.0.2.10 | | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | id | b85c807e-4d7c-4cbd-b725-5e8afddf80d2 | | name | member1 | | operating_status | ONLINE | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | protocol_port | 1234 | | provisioning_status | ACTIVE | | subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | | updated_at | 2024-01-15T11:12:42 | | weight | 1 | | monitor_port | None | | monitor_address | None | | backup | False | +---------------------+--------------------------------------+
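Optionally, you can send a test datagram to the listener from a client that can reach the VIP address. This is only a suggested reachability check; whether you receive a reply depends entirely on the UDP application that runs on the members. Replace <vip_address> with the VIP address of your load balancer:
$ echo "test" | nc -u -w 3 <vip_address> 1234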
7.3. Creating a QoS-ruled load balancer
You can apply a Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) Quality of Service (QoS) policy to virtual IP addresses (VIPs) that use load balancers. In this way, you can use a QoS policy to limit incoming or outgoing network traffic that the load balancer can manage. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
- A QoS policy that contains bandwidth limit rules created for the Networking service.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a network bandwidth QoS policy (
qos_policy_bandwidth
) with a maximum bandwidth of 1024 kbps and a maximum burst rate of 1024 kb.
Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack network qos policy create qos_policy_bandwidth $ openstack network qos rule create --type bandwidth-limit --max-kbps 1024 --max-burst-kbits 1024 qos_policy_bandwidth
Create a load balancer (
lb1
) on the public subnet (public_subnet
) by using a QoS policy (qos_policy_bandwidth
).Example
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet \ --vip-qos-policy-id qos_policy_bandwidth --wait
Create a listener (
listener1
) on a port (80
).Example
$ openstack loadbalancer listener create --name listener1 \ --protocol HTTP --protocol-port 80 lb1
Create the listener default pool (
pool1
).Example
The command in this example creates an HTTP pool that uses a private subnet containing back-end servers that host an HTTP application on TCP port 80:
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
Create a health monitor (
healthmon1
) on the pool that connects to the back-end servers and tests the path (/
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / \ pool1
Add load balancer members (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the default pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (lb1) settings.
Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
In this example, the vip_qos_policy_id parameter contains a policy ID.
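You can also confirm that the QoS policy and its bandwidth-limit rule exist in the Networking service (neutron). These commands are suggested checks that use the sample policy name from this procedure:
$ openstack network qos policy show qos_policy_bandwidth
$ openstack network qos rule list qos_policy_bandwidth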
7.4. Creating a load balancer with an access control list
You can create an access control list (ACL) to limit incoming traffic to a Red Hat OpenStack Services on OpenShift (RHOSO) listener to a set of allowed source IP addresses. Any other incoming traffic is rejected. It is a best practice to also create a health monitor to ensure that your back-end members remain available.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- A shared external (public) subnet that you can reach from the internet.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a load balancer (
lb1
) on the public subnet (public_subnet
).NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 --vip-subnet-id public_subnet --wait
Create a listener (
listener1
) with the allowed CIDRs (192.0.2.0/24
and198.51.100.0/24
).Example
$ openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 --allowed-cidr 192.0.2.0/24 --allowed-cidr 198.51.100.0/24 lb1
Create the listener default pool (
pool1
).Example
In this example, a pool is created that uses a private subnet containing back-end servers that are configured with a custom application on TCP port 80:
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP
Create a health monitor on the pool that connects to the back-end servers and tests the path (
/
).Health checks are recommended but not required. If no health monitor is defined, the member server is assumed to be
ONLINE
.Example
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
Add load balancer members (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the default pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the listener (
listener1
) settings.Example
$ openstack loadbalancer listener show listener1
Sample output
+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2024-01-15T11:11:09 | | default_pool_id | None | | default_tls_container_ref | None | | description | | | id | d26ba156-03c3-4051-86e8-f8997a202d8e | | insert_headers | None | | l7policies | | | loadbalancers | 2281487a-54b9-4c2a-8d95-37262ec679d6 | | name | listener1 | | operating_status | ONLINE | | project_id | 308ca9f600064f2a8b3be2d57227ef8f | | protocol | TCP | | protocol_port | 80 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2024-01-15T11:12:42 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | 192.0.2.0/24 | | | 198.51.100.0/24 | +-----------------------------+--------------------------------------+
In this example, the allowed_cidrs parameter is set to allow traffic only from 192.0.2.0/24 and 198.51.100.0/24.
To verify that the load balancer is secure, send a request to the listener from a client whose source address is not within the allowed_cidrs ranges, and confirm that the request does not succeed.
Sample output
curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out curl: (7) Failed to connect to 203.0.113.226 port 80: Connection timed out
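Conversely, a request from a client whose source address is inside one of the allowed CIDRs, for example an address in 192.0.2.0/24, should succeed. This is a suggested complementary check; the response depends on the custom application that runs on the members:
$ curl -v http://203.0.113.226/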
7.5. Creating an OVN load balancer
You can use the OpenStack client to create a load balancer that manages network traffic in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. The RHOSO Load-Balancing service supports the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN).
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
The ML2/OVN provider driver must be deployed.
Important: The OVN provider only supports Layer 4 TCP and UDP network traffic and the
SOURCE_IP_PORT
load balancer algorithm. The OVN provider does not support health monitoring.
- A shared external (public) subnet that you can reach from the internet.
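Before you begin, you can confirm that the OVN provider driver is enabled by listing the available Load-balancing service providers. This is a suggested check; the exact set of providers depends on your deployment:
$ openstack loadbalancer provider list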
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a load balancer (
lb1
) on the private subnet (private_subnet
) using the--provider ovn
argument.NoteValues inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer create --name lb1 --provider ovn \ --vip-subnet-id private_subnet --wait
Create a listener (
listener1
) that uses the protocol (tcp
) on the specified port (80
) for which the custom application is configured.NoteThe OVN provider only supports Layer 4 TCP and UDP network traffic.
Example
$ openstack loadbalancer listener create --name listener1 \ --protocol tcp --protocol-port 80 lb1
Create the listener default pool (
pool1
).NoteThe only supported load-balancing algorithm for OVN is
SOURCE_IP_PORT
.Example
The command in this example creates a TCP pool that uses a private subnet containing back-end servers that host a custom application on a specific TCP port:
$ openstack loadbalancer pool create --name pool1 --lb-algorithm \ SOURCE_IP_PORT --listener listener1 --protocol tcp
Important: OVN does not support the health monitor feature for load balancing.
Add the back-end servers (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool.Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 pool1 $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 pool1
Verification
View and verify the load balancer (
lb1
) settings.Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:11:09 | | description | | | flavor | | | id | 788fe121-3dec-4e1b-8360-4020642238b0 | | listeners | 09f28053-fde8-4c78-88b9-0f191d84120e | | name | lb1 | | operating_status | ONLINE | | pools | 627842b3-eed8-4f5f-9f4a-01a738e64d6a | | project_id | dda678ca5b1241e7ad7bf7eb211a2fd7 | | provider | ovn | | provisioning_status | ACTIVE | | updated_at | 2024-01-15T11:12:42 | | vip_address | 198.51.100.11 | | vip_network_id | 9bca13be-f18d-49a5-a83d-9d487827fd16 | | vip_port_id | 69a85edd-5b1c-458f-96f2-b4552b15b8e6 | | vip_qos_policy_id | None | | vip_subnet_id | 5bd7334b-49b3-4849-b3a2-b0b83852dba1 | +---------------------+--------------------------------------+
Run the
openstack loadbalancer listener show
command to view the listener details.Example
$ openstack loadbalancer listener show listener1
Sample output
+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2024-01-15T11:13:52 | | default_pool_id | a5034e7a-7ddf-416f-9c42-866863def1f2 | | default_tls_container_ref | None | | description | | | id | a101caba-5573-4153-ade9-4ea63153b164 | | insert_headers | None | | l7policies | | | loadbalancers | 653b8d79-e8a4-4ddc-81b4-e3e6b42a2fe3 | | name | listener1 | | operating_status | ONLINE | | project_id | 7982a874623944d2a1b54fac9fe46f0b | | protocol | TCP | | protocol_port | 64015 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2024-01-15T11:15:17 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+
Run the
openstack loadbalancer pool show
command to view the pool (pool1
) and load-balancer members.Example
$ openstack loadbalancer pool show pool1
Sample output
+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-01-15T11:17:34 | | description | | | healthmonitor_id | | | id | a5034e7a-7ddf-416f-9c42-866863def1f2 | | lb_algorithm | SOURCE_IP_PORT | | listeners | a101caba-5573-4153-ade9-4ea63153b164 | | loadbalancers | 653b8d79-e8a4-4ddc-81b4-e3e6b42a2fe3 | | members | 90d69170-2f73-4bfd-ad31-896191088f59 | | name | pool1 | | operating_status | ONLINE | | project_id | 7982a874623944d2a1b54fac9fe46f0b | | protocol | TCP | | provisioning_status | ACTIVE | | session_persistence | None | | updated_at | 2024-01-15T11:18:59 | | tls_container_ref | None | | ca_tls_container_ref | None | | crl_container_ref | None | | tls_enabled | False | +----------------------+--------------------------------------+
Chapter 8. Implementing layer 7 load balancing
This content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can use the RHOSO Load-balancing service (octavia) with layer 7 policies to redirect HTTP requests to particular application server pools by using several criteria to meet your business needs:
- Section 8.1, “About layer 7 load balancing”
- Section 8.2, “Layer 7 load balancing in the Load-balancing service”
- Section 8.3, “Layer 7 load-balancing rules”
- Section 8.4, “Layer 7 load-balancing rule types”
- Section 8.5, “Layer 7 load-balancing rule comparison types”
- Section 8.6, “Layer 7 load-balancing rule result inversion”
- Section 8.7, “Layer 7 load-balancing policies”
- Section 8.8, “Layer 7 load-balancing policy logic”
- Section 8.9, “Layer 7 load-balancing policy actions”
- Section 8.10, “Layer 7 load-balancing policy position”
- Section 8.11, “Redirecting unsecure HTTP requests to secure HTTP”
- Section 8.12, “Redirecting requests based on the starting path to a pool”
- Section 8.13, “Sending subdomain requests to a specific pool”
- Section 8.14, “Sending requests based on the host name ending to a specific pool”
- Section 8.15, “Sending requests based on absence of a browser cookie to a specific pool”
- Section 8.16, “Sending requests based on absence of a browser cookie or invalid cookie value to a specific pool”
- Section 8.17, “Sending requests to a pool whose name matches the hostname and path”
- Section 8.18, “Configuring A-B testing on an existing production site by using a cookie”
8.1. About layer 7 load balancing
Layer 7 (L7) load balancing takes its name from the Open Systems Interconnection (OSI) model, indicating that the load balancer distributes requests to back-end application server pools based on layer 7 (application) data. The following terms all refer to L7 load balancing: request switching, application load balancing, and content-based routing, switching, or balancing. The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) provides robust support for L7 load balancing.
You cannot create L7 policies and rules with UDP load balancers.
An L7 load balancer consists of a listener that accepts requests on behalf of a number of back-end pools and distributes those requests based on policies that use application data to determine which pool services any given request. This allows the application infrastructure to be specifically tuned and optimized to serve specific types of content. For example, you can tune one group of back-end servers (a pool) to serve only images; another for execution of server-side scripting languages like PHP and ASP; and another for static content such as HTML, CSS, and JavaScript.
Unlike lower-level load balancing, L7 load balancing does not require that all pools behind the load balancing service have the same content. L7 load balancers can direct requests based on URI, host, HTTP headers, and other data in the application message.
8.2. Layer 7 load balancing in the Load-balancing service
Although you can implement layer 7 (L7) load balancing for any well-defined L7 application interface, L7 functionality for the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) refers only to the HTTP
and TERMINATED_HTTPS
protocols and their semantics.
The Load-balancing service uses L7 rules and policies for the logic of L7 load balancing. An L7 rule is a single, simple logical test that evaluates to true or false. An L7 policy is a collection of L7 rules and a defined action to take if all the rules associated with the policy match.
8.3. Layer 7 load-balancing rules
For the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), a layer 7 (L7) load-balancing rule is a single, simple logical test that returns either true or false. It consists of a rule type, a comparison type, a value, and an optional key that is used depending on the rule type. An L7 rule must always be associated with an L7 policy.
You cannot create L7 policies and rules with UDP load balancers.
Additional resources
8.4. Layer 7 load-balancing rule types
The Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) has the following types of layer 7 load-balancing rules:
-
HOST_NAME
: The rule compares the HTTP/1.1 hostname in the request against the value parameter in the rule. -
PATH
: The rule compares the path portion of the HTTP URI against the value parameter in the rule. -
FILE_TYPE
: The rule compares the last portion of the URI against the value parameter in the rule, for example, txt, jpg, and so on. -
HEADER
: The rule looks for a header defined in the key parameter and compares it against the value parameter in the rule. -
COOKIE
: The rule looks for a cookie named by the key parameter and compares it against the value parameter in the rule. -
SSL_CONN_HAS_CERT
: The rule matches if the client has presented a certificate for TLS client authentication. This does not imply that the certificate is valid. -
SSL_VERIFY_RESULT
: This rule matches the TLS client authentication certificate validation result. A value of zero (0
) means the certificate was successfully validated. A value greater than zero means the certificate failed validation. This value follows theopenssl-verify
result codes. -
SSL_DN_FIELD
: The rule looks for aDistinguished Name
field defined in the key parameter and compares it against the value parameter in the rule.
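As an illustration of how the type, key, and value parameters fit together, the following hypothetical rule matches requests whose X-Custom-Header HTTP header equals the string example; the policy name policy1 is an assumption:
$ openstack loadbalancer l7rule create --type HEADER --key X-Custom-Header --compare-type EQUAL_TO --value example policy1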
Additional resources
8.5. Layer 7 load-balancing rule comparison types
For the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), layer 7 load-balancing rules of a given type always perform comparisons. The Load-balancing service supports the following types of comparisons. Not all rule types support all comparison types:
-
REGEX
: Perl type regular expression matching -
STARTS_WITH
: String starts with -
ENDS_WITH
: String ends with -
CONTAINS
: String contains -
EQUAL_TO
: String is equal to
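For example, assuming a policy named policy1 already exists, a REGEX comparison on the request path might look like the following sketch:
$ openstack loadbalancer l7rule create --type PATH --compare-type REGEX --value '^/(api|v2)/' policy1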
Additional resources
8.6. Layer 7 load-balancing rule result inversion
To more fully express the logic that some policies require, the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) supports inverting the result of layer 7 load-balancing rules. If the invert parameter of a given rule is true, the result of its comparison is inverted.
For example, an inverted equal to rule effectively becomes a not equal to rule. An inverted regex rule returns true
only if the given regular expression does not match.
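For example, assuming a policy named policy1, the following sketch creates a rule that matches only when the request path does not start with /static:
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /static --invert policy1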
Additional resources
8.7. Layer 7 load-balancing policies
For the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), a layer 7 (L7) load-balancing policy is a collection of L7 rules associated with a listener, and which might also have an association to a back end pool. Policies are actions that the load balancer takes if all of the rules in the policy are true.
You cannot create L7 policies and rules with UDP load balancers.
Additional resources
8.8. Layer 7 load-balancing policy logic
In the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), layer 7 load-balancing policies use the following logic: all the rules associated with a given policy are logically AND-ed together. A request must match all of the policy rules to match the policy.
If you need to express a logical OR operation between rules, either create multiple policies with the same action, or make a more elaborate regular expression.
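As a sketch of the multiple-policy approach, the following commands express "path starts with /js OR path starts with /images" by creating two policies with the same redirect action. The names listener1 and static_pool follow the example in Section 8.12, and the policy names are illustrative:
$ openstack loadbalancer l7policy create --name policy_js --action REDIRECT_TO_POOL --redirect-pool static_pool listener1
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /js policy_js
$ openstack loadbalancer l7policy create --name policy_images --action REDIRECT_TO_POOL --redirect-pool static_pool listener1
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /images policy_images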
Additional resources
8.9. Layer 7 load-balancing policy actions
If the layer 7 load-balancing policy matches a given request, then that policy action is executed. The following are the actions an L7 policy might take:
-
REJECT
: The request is denied with an appropriate response code, and not forwarded on to any back end pool. -
REDIRECT_TO_URL
: The request is sent an HTTP redirect to the URL defined in the redirect_url parameter. -
REDIRECT_PREFIX
: Requests matching this policy are redirected to this prefix URL. -
REDIRECT_TO_POOL
: The request is forwarded to the back-end pool associated with the L7 policy.
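For example, a hypothetical REJECT policy that blocks requests whose path starts with /admin might look like the following sketch; the listener name listener1 and the policy name deny_admin are assumptions:
$ openstack loadbalancer l7policy create --name deny_admin --action REJECT listener1
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /admin deny_admin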
Additional resources
8.10. Layer 7 load-balancing policy position
For the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), when multiple layer 7 (L7) load-balancing policies are associated with a listener, the value of the policy position parameter becomes important. The position parameter determines the order in which L7 policies are evaluated. The policy position affects listener behavior in the following ways:
- L7 policies are evaluated in a specific order, as defined by the position attribute, and the first policy that matches a given request is the one whose action is followed.
- If no policy matches a given request, then the request is routed to the listener’s default pool, if it exists. If the listener has no default pool, then an error 503 is returned.
- Policy position numbering starts with one (1).
- If a new policy is created with a position that matches that of an existing policy, then the new policy is inserted at the given position.
- If a new policy is created without specifying a position, or specifying a position that is greater than the number of policies already in the list, the new policy is appended to the list.
- When policies are inserted, deleted, or appended to the list, the policy position values are re-ordered from one (1) without skipping numbers. For example, if policies A, B, and C have position values of 1, 2, and 3 respectively, and you delete policy B from the list, the position for policy C becomes 2.
In the reference implementation of the Load-balancing service (haproxy amphorae), HAProxy enforces the following ordering regarding policy actions:
- REJECT policies take precedence over all other policies.
- REDIRECT_TO_URL policies take precedence over REDIRECT_TO_POOL policies.
- REDIRECT_TO_POOL policies are evaluated only after all of the above, and in the order that the position of the policy specifies.
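For example, to insert a hypothetical policy at the head of the evaluation order on a listener (listener1) and then review the resulting ordering, you might run something like the following; the policy name and pool name are illustrative:
$ openstack loadbalancer l7policy create --name policy_first --action REDIRECT_TO_POOL --redirect-pool pool1 --position 1 listener1
$ openstack loadbalancer l7policy list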
Additional resources
8.11. Redirecting unsecure HTTP requests to secure HTTP
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) with layer 7 (L7) policies to redirect HTTP requests that are received on a non-secure TCP port to a secure TCP port.
In this example, any HTTP requests that arrive on the unsecure TCP port, 80, are redirected to the secure TCP port, 443.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
-
A TLS-terminated HTTPS load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
). For more information, see Creating a TLS-terminated HTTPS load balancer.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create an HTTP listener (
http_listener
) on a load balancer (lb1
) port (80
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer listener create --name http_listener \ --protocol HTTP --protocol-port 80 lb1
Create an L7 policy (
policy1
) on the listener (http_listener
). The policy must contain the action (REDIRECT_PREFIX
) and point to the prefix URL (https://www.example.com/
).Example
$ openstack loadbalancer l7policy create --name policy1 \ --action REDIRECT_PREFIX --redirect-prefix https://www.example.com/ \ http_listener
Add an L7 rule that matches all requests to a policy (
policy1
).Example
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \ --type PATH --value / policy1
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policy,policy1
, exists. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofSTARTS_WITH
exists.Example
$ openstack loadbalancer l7rule list policy1
8.12. Redirecting requests based on the starting path to a pool
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to redirect HTTP requests to an alternate pool of servers. You can define a layer 7 (L7) policy to match one or more starting paths in the URL of the request.
In this example, any requests that contain URLs that begin with /js
or /images
are redirected to an alternate pool of static content servers.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
-
An HTTP load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
). For more information, see Creating an HTTP load balancer with a health monitor.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
static_pool
) on a load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name static_pool \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add load balancer members (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool (static_pool
):Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 static_pool $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 static_pool
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (static_pool
).Example
$ openstack loadbalancer l7policy create --name policy1 \ --action REDIRECT_TO_POOL --redirect-pool static_pool listener1
Add an L7 rule that looks for
/js
at the start of the request path to the policy.Example
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \ --type PATH --value /js policy1
Create an L7 policy (
policy2
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (static_pool).Example
$ openstack loadbalancer l7policy create --name policy2 \ --action REDIRECT_TO_POOL --redirect-pool static_pool listener1
Add an L7 rule that looks for
/images
at the start of the request path to the policy.Example
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \ --type PATH --value /images policy2
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policies,policy1
andpolicy2
, exist. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofSTARTS_WITH
exists for each respective policy.Example
$ openstack loadbalancer l7rule list policy1 $ openstack loadbalancer l7rule list policy2
8.13. Sending subdomain requests to a specific pool
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) with layer 7 (L7) policies to redirect requests containing a specific HTTP/1.1 hostname to a different pool of application servers.
In this example, any requests that contain the HTTP/1.1 hostname, www2.example.com
, are redirected to an alternate pool of application servers, pool2
.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
An HTTP load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
).For more information, see Creating an HTTP load balancer with a health monitor.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
pool2
) on the load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name pool2 \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (pool2
).Example
$ openstack loadbalancer l7policy create --name policy1 \ --action REDIRECT_TO_POOL --redirect-pool pool2 listener1
Add an L7 rule to the policy that sends any requests using the HTTP/1.1 hostname, www2.example.com, to the second pool (
pool2
).Example
$ openstack loadbalancer l7rule create --compare-type EQUAL_TO \ --type HOST_NAME --value www2.example.com policy1
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policy,policy1
, exists. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofEQUAL_TO
exists for the policy.Example
$ openstack loadbalancer l7rule list policy1
8.14. Sending requests based on the host name ending to a specific pool
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) with layer 7 (L7) policies to redirect requests containing an HTTP/1.1 hostname that ends in a specific string to a different pool of application servers.
In this example, any requests that contain an HTTP/1.1 hostname that ends with, .example.com
, are redirected to an alternate pool of application servers, pool2
.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
An HTTP load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
).For more information, see Creating an HTTP load balancer with a health monitor
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
pool2
) on the load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name pool2 \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (pool2
).Example
$ openstack loadbalancer l7policy create --name policy1 \ --action REDIRECT_TO_POOL --redirect-pool pool2 listener1
Add an L7 rule to the policy that sends any requests that use an HTTP/1.1 hostname ending in
.example.com
(for example, www2.example.com) to the second pool (pool2
).Example
$ openstack loadbalancer l7rule create --compare-type ENDS_WITH \ --type HOST_NAME --value .example.com policy1
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policy,policy1
, exists. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofENDS_WITH
exists for the policy.Example
$ openstack loadbalancer l7rule list policy1
8.15. Sending requests based on absence of a browser cookie to a specific pool
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to redirect unauthenticated web client requests to a different pool that contains one or more authentication servers. A layer 7 (L7) policy determines whether the incoming request is missing an authentication cookie.
In this example, any web client requests that lack the browser cookie, auth_token
, are redirected to an alternate pool that contains an authentication server.
This procedure provides an example for how to perform L7 application routing by using a browser cookie, and does not address security concerns.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
A TLS-terminated HTTPS load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
).For more information, see Creating a TLS-terminated HTTPS load balancer.
- A second Networking service (neutron) subnet on which a secure authentication server authenticates web users.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
login_pool
) on the load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name login_pool \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add a member, the secure authentication server (
192.0.2.10
) on the authenticating subnet (secure_subnet
), to the second pool.Example
In this example, the back-end server,
192.0.2.10
, is namedmember1
:$ openstack loadbalancer member create --name member1 \ --address 192.0.2.10 --protocol-port 80 --subnet-id secure_subnet \ login_pool
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the second pool (login_pool
).Example
$ openstack loadbalancer l7policy create --name policy1 \ --action REDIRECT_TO_POOL --redirect-pool login_pool listener1
Add an L7 rule to the policy (
policy1
) that searches for the browser cookie (auth_token
) with any value, and matches if the cookie is NOT present.Example
$ openstack loadbalancer l7rule create --compare-type REGEX \ --key auth_token --type COOKIE --value '.*' --invert policy1
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policy,policy1
, exists. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofREGEX
exists.Example
$ openstack loadbalancer l7rule list policy1
8.16. Sending requests based on absence of a browser cookie or invalid cookie value to a specific pool
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to redirect unauthenticated web client requests to a different pool that contains one or more authentication servers. A layer 7 (L7) policy determines whether the incoming request is missing an authentication cookie or contains an authentication cookie with a particular value.
In this example, any web client requests that either lacks the browser cookie, auth_token
, or has auth_token
with a value of INVALID
, are redirected to an alternate pool that contains an authentication server.
This procedure provides an example for how to perform L7 application routing using a browser cookie, and does not address security concerns.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
A TLS-terminated HTTPS load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
).For more information, see Creating a TLS-terminated HTTPS load balancer.
- A second Networking service (neutron) subnet on which a secure authentication server authenticates web users.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
login_pool
) on the load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name login_pool \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add a member, the secure authentication server (
192.0.2.10
) on the authenticating subnet (secure_subnet
), to the second pool.Example
In this example, the back-end server,
192.0.2.10
, is namedmember1
:$ openstack loadbalancer member create --name member1 \ --address 192.0.2.10 --protocol-port 80 --subnet-id secure_subnet \ login_pool
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the second pool (login_pool
).Example
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \ --redirect-pool login_pool --name policy1 listener1
Add an L7 rule to the policy (
policy1
) that searches for the browser cookie (auth_token
) with any value, and matches if the cookie is NOT present.Example
$ openstack loadbalancer l7rule create --compare-type REGEX \ --key auth_token --type COOKIE --value '.*' --invert policy1
Create a second L7 policy (
policy2
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the second pool (login_pool
).Example
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \ --redirect-pool login_pool --name policy2 listener1
Add an L7 rule to the second policy (
policy2
) that searches for the browser cookie (auth_token
) and matches if the cookie value equals the stringINVALID
.Example
$ openstack loadbalancer l7rule create --compare-type EQUAL_TO \ --key auth_token --type COOKIE --value INVALID policy2
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policies,policy1
andpolicy2
, exist. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofREGEX
exists forpolicy1
and a rule with acompare_type
ofEQUAL_TO
exists forpolicy2
.Example
$ openstack loadbalancer l7rule list policy1 $ openstack loadbalancer l7rule list policy2
8.17. Sending requests to a pool whose name matches the hostname and path
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to redirect web client requests that match certain criteria to an alternate pool of application servers. The business logic criteria is performed through a layer 7 (L7) policy that attempts to match a predefined hostname and request path.
In this example, any web client requests that match the hostname api.example.com
, and have /api
at the start of the request path are redirected to an alternate pool, api_pool
.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
-
An HTTP load balancer (
lb1
) that has a listener (listener1
) and a pool (pool1
). For more information, see Creating an HTTP load balancer with a health monitor.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a second pool (
api_pool
) on the load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name api_pool \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add load balancer members (
192.0.2.10
and192.0.2.11
) on the private subnet (private_subnet
) to the pool (api_pool
):Example
In this example, the back-end servers,
192.0.2.10
and192.0.2.11
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 --subnet-id \ private_subnet --address 192.0.2.10 --protocol-port 80 api_pool $ openstack loadbalancer member create --name member2 --subnet-id \ private_subnet --address 192.0.2.11 --protocol-port 80 api_pool
Create an L7 policy (
policy1
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (api_pool
).Example
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \ --redirect-pool api_pool --name policy1 listener1
Add an L7 rule to the policy that matches the hostname
api.example.com
.Example
$ openstack loadbalancer l7rule create --compare-type EQUAL_TO \ --type HOST_NAME --value api.example.com policy1
Add a second L7 rule to the policy that matches
/api
at the start of the request path.This rule is logically ANDed with the first rule.
Example
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \ --type PATH --value /api policy1
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policy,policy1
, exists. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that rules with acompare_type
ofEQUAL_TO
andSTARTS_WITH
, respectively, both exist forpolicy1
.Example
$ openstack loadbalancer l7rule list policy1
8.18. Configuring A-B testing on an existing production site by using a cookie
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) with layer 7 (L7) policies to configure A-B testing, or split testing, for your production websites.
In this example, web clients that are routed to the “B” version of the website set the cookie site_version
to B
by the member servers in the pool (pool1
).
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
- Two production websites (site A and site B).
You have configured an HTTP load balancer following the instructions for "Redirecting requests based on the starting path to a pool." A summary of the required configuration is:
-
Listener (
listener1
) on load balancer (lb1
). -
HTTP requests with a URL that starts with either
/js
or/images
are sent to a pool (static_pool
). -
All other requests are sent to the listener default pool (
pool1
). - For more information about the configuration, see Section 8.12, “Redirecting requests based on the starting path to a pool”.
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Create a third pool (
pool_B
) on a load balancer (lb1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer pool create --name pool_B \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add load balancer members (
192.0.2.50
and192.0.2.51
) on the private subnet (private_subnet
) to the pool (pool_B
):Example
In this example, the back-end servers,
192.0.2.50
and192.0.2.51
, are namedmember1
andmember2
, respectively:$ openstack loadbalancer member create --name member1 \ --address 192.0.2.50 --protocol-port 80 \ --subnet-id private_subnet pool_B $ openstack loadbalancer member create --name member2 \ --address 192.0.2.51 --protocol-port 80 \ --subnet-id private_subnet pool_B
Create a fourth pool (
static_pool_B
) on a load balancer (lb1
).Example
$ openstack loadbalancer pool create --name static_pool_B \ --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
Add load balancer members (
192.0.2.100
and192.0.2.101
) on the private subnet (private_subnet
) to the pool (static_pool_B
):Example
$ openstack loadbalancer member create --name member3 \ --address 192.0.2.100 --protocol-port 80 \ --subnet-id private_subnet static_pool_B $ openstack loadbalancer member create --name member4 \ --address 192.0.2.101 --protocol-port 80 \ --subnet-id private_subnet static_pool_B
Create an L7 policy (
policy2
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (static_pool_B
). Insert the policy at position1
.Example
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \ --redirect-pool static_pool_B --name policy2 --position 1 listener1
Add an L7 rule to the policy (
policy2
) that uses a regular expression to match either/js
or/images
at the start of the request path.Example
$ openstack loadbalancer l7rule create --compare-type REGEX \ --type PATH --value '^/(js|images)' policy2
Add a second L7 rule to the policy (
policy2
) that matches the cookie (site_version
) to the exact string (B
).Example
$ openstack loadbalancer l7rule create --compare-type EQUAL_TO \ --key site_version --type COOKIE --value B policy2
Create an L7 policy (
policy3
) on the listener (listener1
). The policy must contain the action (REDIRECT_TO_POOL
) and point to the pool (pool_B
). Insert the policy at position2
.Example
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \ --redirect-pool pool_B --name policy3 --position 2 listener1
Add an L7 rule to the policy (
policy3
) that matches the cookie (site_version
) to the exact string (B
).Example
$ openstack loadbalancer l7rule create --compare-type EQUAL_TO \ --key site_version --type COOKIE --value B policy3
Note: It is important to assign L7 policies with the most specific rules to a lower position, because the first policy whose rules all evaluate to True is the policy whose action is followed. In this procedure,
policy2
needs to be evaluated beforepolicy3
to avoid requests being sent to the incorrect pool.
Verification
-
Run the
openstack loadbalancer l7policy list
command and verify that the policies,policy2
andpolicy3
, exist. Run the
openstack loadbalancer l7rule list <l7policy>
command and verify that a rule with acompare_type
ofEQUAL_TO
exists for each respective policy.Example
$ openstack loadbalancer l7rule list policy2 $ openstack loadbalancer l7rule list policy3
Chapter 9. Grouping Load-balancing service objects by using tags
This content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Tags are arbitrary strings that you can add to Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) objects for the purpose of classifying them into groups. Tags do not affect the functionality of load-balancing objects: load balancers, listeners, pools, members, health monitors, rules, and policies. You can add a tag when you create the object, or add or remove a tag after the object has been created.
By associating a particular tag with load-balancing objects, you can run list commands to filter objects that belong to one or more groups. Being able to filter objects into one or more groups can be a starting point in managing usage, allocation, and maintenance of your load-balancing service resources. The ability to tag objects can also be leveraged by automated configuration management tools.
The topics included in this section are:
9.1. Adding tags when creating Load-balancing service objects
You can add a tag of your choice when you create a Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) object. When the tags are in place, you can filter load balancers, listeners, pools, members, health monitors, rules, and policies by using their respective loadbalancer list
commands.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Add a tag to a load-balancing object when you create it by using the
--tag <tag>
option with the appropriatecreate
command for the object:- openstack loadbalancer create --tag <tag> …
- openstack loadbalancer listener create --tag <tag> …
- openstack loadbalancer pool create --tag <tag> …
- openstack loadbalancer member create --tag <tag> …
- openstack loadbalancer healthmonitor create --tag <tag> …
- openstack loadbalancer l7policy create --tag <tag> …
openstack loadbalancer l7rule create --tag <tag> …
Note: A tag can be any valid unicode string with a maximum length of 255 characters.
$ openstack loadbalancer create --name lb1 \ --vip-subnet-id public_subnet --tag Finance --tag Sales
Note: Load-balancing service objects can have one or more tags. Repeat the
--tag <tag>
option for each additional tag that you want to add.$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 --tag Sales lb1
$ openstack loadbalancer pool create --name pool1 \ --lb-algorithm ROUND_ROBIN --listener listener1 \ --protocol HTTP --tag Sales
$ openstack loadbalancer member create --name member1 \ --subnet-id private_subnet --address 192.0.2.10 --protocol-port 80 \ --tag Sales pool1
$ openstack loadbalancer healthmonitor create --name healthmon1 \ --delay 15 --max-retries 4 --timeout 10 --type HTTP --url-path / \ --tag Sales pool1
$ openstack loadbalancer l7policy create --action REDIRECT_PREFIX \ --redirect-prefix https://www.example.com/ \ --name policy1 http_listener --tag Sales
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \ --type PATH --value / --tag Sales policy1
Verification
Confirm that the object that you created exists and contains the tag that you added by using the appropriate
show
command for the object.Example
In this example, the
show
command is run on the loadbalancer,lb1
:$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2024-08-06T19:34:15 | | description | | | flavor_id | None | | id | 7975374b-3367-4436-ab19-2d79d8c1f29b | | listeners | | | name | lb1 | | operating_status | ONLINE | | pools | | | project_id | 2eee3b86ca404cdd977281dac385fd4e | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-08-07T13:30:17 | | vip_address | 172.24.3.76 | | vip_network_id | 4c241fc4-95eb-491a-affe-26c53a8805cd | | vip_port_id | 9978a598-cc34-47f7-ba28-49431d570fd1 | | vip_qos_policy_id | None | | vip_subnet_id | e999d323-bd0f-4469-974f-7f66d427e507 | | tags | Finance | | | Sales | +---------------------+--------------------------------------+
9.2. Adding or removing tags on pre-existing Load-balancing service objects
You can add and remove tags of your choice on Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) objects after they have been created. When the tags are in place, you can filter load balancers, listeners, pools, members, health monitors, rules, and policies by using their respective loadbalancer list
commands.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Do one of the following:
Add a tag to a pre-existing load-balancing object by using the
--tag <tag>
option with the appropriateset
command for the object:-
openstack loadbalancer set --tag <tag> <load_balancer_name_or_ID>
-
openstack loadbalancer listener set --tag <tag> <listener_name_or_ID>
-
openstack loadbalancer pool set --tag <tag> <pool_name_or_ID>
-
openstack loadbalancer member set --tag <tag> <pool_name_or_ID> <member_name_or_ID>
-
openstack loadbalancer healthmonitor set --tag <tag> <healthmon_name_or_ID>
-
openstack loadbalancer l7policy set --tag <tag> <l7policy_name_or_ID>
openstack loadbalancer l7rule set --tag <tag> <l7policy_name_or_ID> <l7rule_ID>
Note: A tag can be any valid unicode string with a maximum length of 255 characters.
Example
In this example the tags,
Finance
andSales
, are added to the load balancer,lb1
:$ openstack loadbalancer set --tag Finance --tag Sales lb1
Note: Load-balancing service objects can have one or more tags. Repeat the
--tag <tag>
option for each additional tag that you want to add.
-
Remove a tag from a pre-existing load-balancing object by using the
--tag <tag>
option with the appropriateunset
command for the object:-
openstack loadbalancer unset --tag <tag> <load_balancer_name_or_ID>
-
openstack loadbalancer listener unset --tag <tag> <listener_name_or_ID>
-
openstack loadbalancer pool unset --tag <tag> <pool_name_or_ID>
-
openstack loadbalancer member unset --tag <tag> <pool_name_or_ID> <member_name_or_ID>
-
openstack loadbalancer healthmonitor unset --tag <tag> <healthmon_name_or_ID>
-
openstack loadbalancer l7policy unset --tag <tag> <policy_name_or_ID>
openstack loadbalancer l7rule unset --tag <tag> <policy_name_or_ID> <l7rule_ID>
Example
In this example, the tag,
Sales
, is removed from the load balancer,lb1
:$ openstack loadbalancer unset --tag Sales lb1
-
Remove all tags from a pre-existing load-balancing object by using the
--no-tag
option with the appropriateset
command for the object:-
openstack loadbalancer set --no-tag <load_balancer_name_or_ID>
-
openstack loadbalancer listener set --no-tag <listener_name_or_ID>
-
openstack loadbalancer pool set --no-tag <pool_name_or_ID>
-
openstack loadbalancer member set --no-tag <pool_name_or_ID> <member_name_or_ID>
-
openstack loadbalancer healthmonitor set --no-tag <healthmon_name_or_ID>
-
openstack loadbalancer l7policy set --no-tag <l7policy_name_or_ID>
openstack loadbalancer l7rule set --no-tag <l7policy_name_or_ID> <l7rule_ID>
Example
In this example, all tags are removed from the load balancer,
lb1
:$ openstack loadbalancer set --no-tag lb1
-
Verification
Confirm that you have added or removed one or more tags on the load-balancing object, by using the appropriate
show
command for the object.Example
In this example, the
show
command is run on the loadbalancer,lb1
:$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | admin_state_up | True | | availability_zone | None | | created_at | 2024-08-06T19:34:15 | | description | | | flavor_id | None | | id | 7975374b-3367-4436-ab19-2d79d8c1f29b | | listeners | | | name | lb1 | | operating_status | ONLINE | | pools | | | project_id | 2eee3b86ca404cdd977281dac385fd4e | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-08-07T13:30:17 | | vip_address | 172.24.3.76 | | vip_network_id | 4c241fc4-95eb-491a-affe-26c53a8805cd | | vip_port_id | 9978a598-cc34-47f7-ba28-49431d570fd1 | | vip_qos_policy_id | None | | vip_subnet_id | e999d323-bd0f-4469-974f-7f66d427e507 | | tags | Finance | | | Sales | +---------------------+--------------------------------------+
9.3. Filtering Load-balancing service objects by using tags
You can use the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) to create lists of objects. For the objects that are tagged, you can create filtered lists: lists that include or exclude objects based on whether your objects contain one or more of the specified tags. Being able to filter load balancers, listeners, pools, members, health monitors, rules, and policies using tags can be a starting point in managing usage, allocation, and maintenance of your load-balancing service resources.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Filter the objects that you want to list by running the appropriate
loadbalancer list
command for the objects with one of the tag options:
Table 9.1. Tag options for filtering objects. Each row lists a filtering goal (In my list, I want to…) followed by example commands (Examples).
include objects that match all specified tags.
$ openstack loadbalancer list --tags Sales,Finance
$ openstack loadbalancer listener list --tags Sales,Finance
$ openstack loadbalancer pool list --tags Sales,Finance
$ openstack loadbalancer member list --tags Sales,Finance pool1
$ openstack loadbalancer healthmonitor list --tags Sales,Finance
$ openstack loadbalancer l7policy list --tags Sales,Finance
$ openstack loadbalancer l7rule list --tags Sales,Finance policy1
include objects that match one or more specified tags.
$ openstack loadbalancer list --any-tags Sales,Finance
$ openstack loadbalancer listener list --any-tags Sales,Finance
$ openstack loadbalancer pool list --any-tags Sales,Finance
$ openstack loadbalancer member list --any-tags Sales,Finance pool1
$ openstack loadbalancer healthmonitor list --any-tags Sales,Finance
$ openstack loadbalancer l7policy list --any-tags Sales,Finance
$ openstack loadbalancer l7rule list --any-tags Sales,Finance policy1
exclude objects that match all specified tags.
$ openstack loadbalancer list --not-tags Sales,Finance
$ openstack loadbalancer listener list --not-tags Sales,Finance
$ openstack loadbalancer pool list --not-tags Sales,Finance
$ openstack loadbalancer member list --not-tags Sales,Finance pool1
$ openstack loadbalancer healthmonitor list --not-tags Sales,Finance
$ openstack loadbalancer l7policy list --not-tags Sales,Finance
$ openstack loadbalancer l7rule list --not-tags Sales,Finance policy1
exclude objects that match one or more specified tags.
$ openstack loadbalancer list --not-any-tags Sales,Finance
$ openstack loadbalancer listener list --not-any-tags Sales,Finance
$ openstack loadbalancer pool list --not-any-tags Sales,Finance
$ openstack loadbalancer member list --not-any-tags Sales,Finance pool1
$ openstack loadbalancer healthmonitor list --not-any-tags Sales,Finance
$ openstack loadbalancer l7policy list --not-any-tags Sales,Finance
$ openstack loadbalancer l7rule list --not-any-tags Sales,Finance policy1
Note: When specifying more than one tag, separate the tags by using a comma.
Chapter 10. Troubleshooting and maintaining the Load-balancing service
This content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Basic troubleshooting and maintenance for the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) starts with being familiar with the OpenStack client commands for showing status and migrating instances, and knowing how to access logs. If you need to troubleshoot more in depth, you can SSH into one or more Load-balancing service instances (amphorae).
10.1. Verifying the load balancer
You can troubleshoot the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia) and its various components by viewing the output of the load balancer show and list commands.
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.Verify the load balancer (
lb1
) settings. Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer show lb1
Sample output
+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-02-17T15:59:18 | | description | | | flavor_id | None | | id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | name | lb1 | | operating_status | ONLINE | | pools | 48f6664c-b192-4763-846a-da568354da4a | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2024-02-17T16:01:21 | | vip_address | 192.0.2.177 | | vip_network_id | afeaf55e-7128-4dff-80e2-98f8d1f2f44c | | vip_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | vip_qos_policy_id | None | | vip_subnet_id | 06ffa90e-2b86-4fe3-9731-c7839b0be6de | +---------------------+--------------------------------------+
Using the loadbalancer ID (
265d0b71-c073-40f4-9718-8a182c6d53ca
) from the previous step, obtain the ID of the amphora associated with the load balancer (lb1
).Example
$ openstack loadbalancer amphora list | grep 265d0b71-c073-40f4-9718-8a182c6d53ca
Sample output
| 1afabefd-ba09-49e1-8c39-41770aa25070 | 265d0b71-c073-40f4-9718-8a182c6d53ca | ALLOCATED | STANDALONE | 198.51.100.7 | 192.0.2.177 |
Using the amphora ID (
1afabefd-ba09-49e1-8c39-41770aa25070
) from the previous step, view amphora information.Example
$ openstack loadbalancer amphora show 1afabefd-ba09-49e1-8c39-41770aa25070
Sample output
+-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | id | 1afabefd-ba09-49e1-8c39-41770aa25070 | | loadbalancer_id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | compute_id | ba9fc1c4-8aee-47ad-b47f-98f12ea7b200 | | lb_network_ip | 198.51.100.7 | | vrrp_ip | 192.0.2.36 | | ha_ip | 192.0.2.177 | | vrrp_port_id | 07dcd894-487a-48dc-b0ec-7324fe5d2082 | | ha_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | cert_expiration | 2026-03-19T15:59:23 | | cert_busy | False | | role | STANDALONE | | status | ALLOCATED | | vrrp_interface | None | | vrrp_id | 1 | | vrrp_priority | None | | cached_zone | nova | | created_at | 2024-02-17T15:59:22 | | updated_at | 2024-02-17T16:00:50 | | image_id | 53001253-5005-4891-bb61-8784ae85e962 | | compute_flavor | 65 | +-----------------+--------------------------------------+
View the listener (
listener1
) details.Example
$ openstack loadbalancer listener show listener1
Sample output
+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2024-02-17T16:00:59 | | default_pool_id | 48f6664c-b192-4763-846a-da568354da4a | | default_tls_container_ref | None | | description | | | id | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | insert_headers | None | | l7policies | | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | name | listener1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | protocol_port | 80 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2024-02-17T16:01:21 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+
View the pool (
pool1
) and load-balancer members.Example
$ openstack loadbalancer pool show pool1
Sample output
+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2024-02-17T16:01:08 | | description | | | healthmonitor_id | 4b24180f-74c7-47d2-b0a2-4783ada9a4f0 | | id | 48f6664c-b192-4763-846a-da568354da4a | | lb_algorithm | ROUND_ROBIN | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | members | b92694bd-3407-461a-92f2-90fb2c4aedd1 | | | 4ccdd1cf-736d-4b31-b67c-81d5f49e528d | | name | pool1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | provisioning_status | ACTIVE | | session_persistence | None | | updated_at | 2024-02-17T16:05:21 | | tls_container_ref | None | | ca_tls_container_ref | None | | crl_container_ref | None | | tls_enabled | False | +----------------------+--------------------------------------+
Verify HTTPS traffic flows across a load balancer whose listener is configured for
HTTPS
orTERMINATED_HTTPS
protocols by connecting to the VIP address (192.0.2.177
) of the load balancer.TipObtain the load-balancer VIP address by using the command,
openstack loadbalancer show <load_balancer_name>
. Note: Security groups implemented for the load balancer VIP only allow data traffic for the required protocols and ports. For this reason you cannot ping load balancer VIPs, because ICMP traffic is blocked.
Example
$ curl -v https://192.0.2.177 --insecure
Sample output
* About to connect() to 192.0.2.177 port 443 (#0) * Trying 192.0.2.177... * Connected to 192.0.2.177 (192.0.2.177) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US * start date: Jan 15 09:21:45 2024 GMT * expire date: Jan 15 09:21:45 2027 GMT * common name: www.example.com * issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 192.0.2.177 > Accept: */* > < HTTP/1.1 200 OK < Content-Length: 30 < * Connection #0 to host 192.0.2.177 left intact
10.2. Migrating a specific Load-balancing service instance
In some cases you must migrate a Load-balancing service instance (amphora). For example, if the host is being shut down for maintenance.
Prerequisites
-
You have the
oc
command line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-admin
privileges.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Locate the ID of the amphora that you want to migrate. You need to provide the ID in a later step.
$ openstack loadbalancer amphora list
To prevent the Compute scheduler service from scheduling any new amphorae to the Compute node being evacuated, disable the Compute node (
compute-host-1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack compute service set compute-host-1 nova-compute --disable
Fail over the amphora by using the amphora ID (
ea17210a-1076-48ff-8a1f-ced49ccb5e53
) that you obtained.Example
$ openstack loadbalancer amphora failover ea17210a-1076-48ff-8a1f-ced49ccb5e53
Exit the
openstackclient
pod:$ exit
10.3. Showing listener statistics
Using the OpenStack Client, you can obtain statistics about the listener for a particular Red Hat OpenStack Services on OpenShift (RHOSO) loadbalancer:
-
current active connections (
active_connections
). -
total bytes received (
bytes_in
). -
total bytes sent (
bytes_out
). -
total requests that were unable to be fulfilled (
request_errors
). -
total connections handled (
total_connections
).
Prerequisites
-
The administrator has created a project for you and has provided you with a
clouds.yaml
file for you to access the cloud. The
python-openstackclient
package resides on your workstation.$ dnf list installed python-openstackclient
Procedure
Confirm that the system
OS_CLOUD
variable is set for your cloud:$ echo $OS_CLOUD my_cloud
Reset the variable if necessary:
$ export OS_CLOUD=my_other_cloud
As an alternative, you can specify the cloud name by adding the
--os-cloud <cloud_name>
option each time you run anopenstack
command.View the stats for the listener (
listener1
). Note: Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site.
Example
$ openstack loadbalancer listener stats show listener1
Tip: If you do not know the name of the listener, enter the command
loadbalancer listener list
.Sample output
+--------------------+-------+ | Field | Value | +--------------------+-------+ | active_connections | 0 | | bytes_in | 0 | | bytes_out | 0 | | request_errors | 0 | | total_connections | 0 | +--------------------+-------+
Additional resources
10.4. Interpreting listener request errors
You can obtain statistics about the listener for a particular Red Hat OpenStack Services on OpenShift (RHOSO) loadbalancer. For more information, see Section 10.3, “Showing listener statistics”.
One of the statistics tracked by the RHOSO load balancer, request_errors
, counts only errors that occurred in requests from the end user connecting to the load balancer. The request_errors
variable does not measure errors reported by the member server.
For example, if a tenant connects through the RHOSO Load-balancing service (octavia) to a web server that returns an HTTP status code of 400 (Bad Request)
, this error is not collected by the Load-balancing service. Load balancers do not inspect the content of data traffic. In this example, the load balancer interprets this flow as successful because it transported information between the user and the web server correctly.
The following conditions can cause the request_errors
variable to increment:
- early termination from the client, before the request has been sent.
- read error from the client.
- client timeout.
- client closed the connection.
- various bad requests from the client.
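To watch this counter for a specific listener, you can query the listener statistics; listener1 is a sample name, and the -c options, which limit the output to specific columns, are optional:
$ openstack loadbalancer listener stats show listener1 -c request_errors -c total_connections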
Additional resources
10.5. Viewing the load-balancing management network
After the octavia operator has finished deploying octavia, you can view details about the management network.
Procedure
Run the following OpenStack client command using the
oc rsh
command:$ oc rsh -n openstack openstackclient openstack network list -f yaml
Sample output
- ID: 2e4fc309-546b-4ac8-9eae-aa8d70a27a9b Name: octavia-provider-net Subnets: - eea45073-6e56-47fd-9153-12f7f49bc115 - ID: 77881d3f-04b0-46cb-931f-d54003cce9f0 Name: lb-mgmt-net Subnets: - e4ab96af-8077-4971-baa4-e0d40a16f55a
The octavia-provider-net network is the external provider network; it is linked to the octavia network attachment and uses the octavia network attachment interface as its physical network. This network is limited to the OpenShift control plane. The lb-mgmt-net network is a self-service tenant network that connects the Octavia amphora instances.
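To confirm which physical network the provider network uses, you can query that single attribute. The following is a minimal sketch that assumes the sample network name octavia-provider-net; in this configuration, the returned value is expected to correspond to the octavia network attachment:
$ oc rsh openstackclient openstack network show octavia-provider-net -c provider:physical_network -f value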
The amphora controllers do not have direct access to the lb-mgmt-net network. They access it through the octavia network attachment and a router that the octavia-operator manages. To view the subnets, run oc rsh openstackclient openstack subnet list -f yaml:
- ID: e4ab96af-8077-4971-baa4-e0d40a16f55a
  Name: lb-mgmt-subnet
  Network: 77881d3f-04b0-46cb-931f-d54003cce9f0
  Subnet: 172.24.0.0/16
- ID: eea45073-6e56-47fd-9153-12f7f49bc115
  Name: octavia-provider-subnet
  Network: 2e4fc309-546b-4ac8-9eae-aa8d70a27a9b
  Subnet: 172.23.0.0/24
The subnet CIDR for octavia-provider-subnet is taken from the octavia network attachment, and the subnet CIDR of lb-mgmt-subnet is taken from the dst field of the octavia network attachment routes.
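You can inspect the octavia network attachment itself to see where these values come from. The following is a minimal sketch that assumes the network attachment definition is named octavia and resides in the openstack namespace; in its output, look for the routes entries, where dst supplies the lb-mgmt-subnet CIDR and gw points at the router gateway address:
$ oc get network-attachment-definition octavia -n openstack -o yaml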
The octavia-link-router handles the routing between the octavia-provider-net and lb-mgmt-net networks. To view the routers, run oc rsh openstackclient openstack router list -f yaml:
- ID: 371d800c-c803-4210-836b-eb468654462a
  Name: octavia-link-router
  Project: dc65b54e9cba475ba0adba7f898060f2
  State: true
  Status: ACTIVE
The details of the octavia-link-router reveal how it is configured to treat the networks. To retrieve these details, run oc rsh openstackclient openstack router show -f yaml octavia-link-router:
admin_state_up: true
availability_zone_hints: []
availability_zones: []
created_at: '2024-06-11T17:20:57Z'
description: ''
enable_ndp_proxy: null
external_gateway_info:
  enable_snat: false
  external_fixed_ips:
  - ip_address: 172.23.0.150
    subnet_id: eea45073-6e56-47fd-9153-12f7f49bc115
  network_id: 2e4fc309-546b-4ac8-9eae-aa8d70a27a9b
flavor_id: null
id: 371d800c-c803-4210-836b-eb468654462a
interfaces_info:
- ip_address: 172.24.1.89
  port_id: 1a44e94d-f44a-4752-81db-bc5402857a08
  subnet_id: e4ab96af-8077-4971-baa4-e0d40a16f55a
name: octavia-link-router
project_id: dc65b54e9cba475ba0adba7f898060f2
revision_number: 4
routes: []
status: ACTIVE
tags: []
tenant_id: dc65b54e9cba475ba0adba7f898060f2
updated_at: '2024-06-11T17:21:01Z'
The external_gateway_info of the router corresponds to the gw field of the routes provided in the network attachment. Also notice that source network address translation (SNAT) is disabled. This is important because the amphora controllers communicate with the amphorae by using the addresses on the lb-mgmt-net network that OpenStack allocates, not a floating IP. The routes of the network attachment direct traffic from the amphora controllers to the router, and the host routes on the lb-mgmt-net subnet establish the reverse route. This host route uses the ip_address of the port in interfaces_info as the nexthop and the Subnet of the octavia-provider-subnet as the destination.
To view the host routes for the lb-mgmt-subnet, run oc rsh openstackclient openstack subnet show lb-mgmt-subnet -c host_routes -f yaml:
host_routes:
- destination: 172.23.0.0/24
  nexthop: 172.24.1.89
The port used to connect lb-mgmt-subnet to the router is named lb-mgmt-router-port, and you can view its details by running oc rsh openstackclient openstack port show lb-mgmt-router-port -f yaml. Note that you can use the port_id in the router’s interfaces_info instead of the port name.
admin_state_up: true
allowed_address_pairs: []
binding_host_id: ''
binding_profile: {}
binding_vif_details: {}
binding_vif_type: unbound
binding_vnic_type: normal
created_at: '2024-06-11T17:20:41Z'
data_plane_status: null
description: ''
device_id: 371d800c-c803-4210-836b-eb468654462a
device_owner: network:router_interface
device_profile: null
dns_assignment:
- fqdn: host-172-24-1-89.openstackgate.local.
  hostname: host-172-24-1-89
  ip_address: 172.24.1.89
dns_domain: ''
dns_name: ''
extra_dhcp_opts: []
fixed_ips:
- ip_address: 172.24.1.89
  subnet_id: e4ab96af-8077-4971-baa4-e0d40a16f55a
id: 1a44e94d-f44a-4752-81db-bc5402857a08
ip_allocation: immediate
mac_address: fa:16:3e:ba:be:ee
name: lb-mgmt-router-port
network_id: 77881d3f-04b0-46cb-931f-d54003cce9f0
numa_affinity_policy: null
port_security_enabled: true
project_id: dc65b54e9cba475ba0adba7f898060f2
propagate_uplink_status: null
qos_network_policy_id: null
qos_policy_id: null
resource_request: null
revision_number: 3
security_group_ids:
- 055686ce-fb2d-409b-ab74-85df9ab3a9e0
- 5c41444b-0863-4609-9335-d5a66bdbcad8
status: ACTIVE
tags: []
trunk_details: null
updated_at: '2024-06-11T17:21:03Z'
The fixed_ips, device_id, and device_owner fields are all of interest; you can check them with the filtered command after this list:
- fixed_ips matches the IP address in the interfaces_info of the octavia-link-router.
- device_id matches the ID of the octavia-link-router.
- device_owner indicates that OpenStack is using the port as a router interface.
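To check only these three fields, you can filter the port output with the standard -c option. The following is a minimal sketch that uses the sample port name from this section:
$ oc rsh openstackclient openstack port show lb-mgmt-router-port -c fixed_ips -c device_id -c device_owner -f yaml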