Chapter 18. Load balancing on RHOSP
18.1. Using the Octavia OVN load balancer provider driver with Kuryr SDN
If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver.
Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime.
Prerequisites
- Install the RHOSP CLI, openstack.
- Install the OpenShift Container Platform CLI, oc.
- Verify that the Octavia OVN driver on RHOSP is enabled.

Tip: To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list. The ovn driver is displayed in the command’s output, as in the sample output after this list.
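The following is a sample of the provider list on an RHOSP 16 deployment with the OVN driver enabled; the exact driver names and descriptions can vary by release:

$ openstack loadbalancer provider list

+---------+--------------------------------------------------+
| name    | description                                      |
+---------+--------------------------------------------------+
| amphora | The Octavia Amphora driver.                      |
| octavia | Deprecated alias of the Octavia Amphora driver.  |
| ovn     | Octavia OVN driver.                              |
+---------+--------------------------------------------------+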
Procedure
To change from the Octavia Amphora provider driver to Octavia OVN:
Open the kuryr-config ConfigMap. On a command line, enter:

$ oc -n openshift-kuryr edit cm kuryr-config

In the ConfigMap, delete the line that contains kuryr-octavia-provider: default, as in the sketch after this step. The cluster regenerates that line with ovn as the value.
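A minimal sketch of the relevant part of the ConfigMap, assuming the annotation key networkoperator.openshift.io/kuryr-octavia-provider, where the marked line is the one to delete:

...
kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default   # Delete this line.
  name: kuryr-config
  namespace: openshift-kuryr
...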
Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes.

Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter:

$ oc -n openshift-kuryr edit cm kuryr-config

The ovn provider value is displayed in the output, as in the sketch that follows this step.
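Under the same assumption about the annotation key as in the earlier sketch, the relevant part of the output might look like this:

...
kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: ovn
  name: kuryr-config
  namespace: openshift-kuryr
...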
Verify that RHOSP recreated its load balancers. On a command line, enter:
$ openstack loadbalancer list | grep amphora

A single Amphora load balancer is displayed. For example:

a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora
Search for ovn load balancers by entering:

$ openstack loadbalancer list | grep ovn

The remaining load balancers of the ovn type are displayed. For example:

2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn
0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd                  | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn
f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics       | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27  | ACTIVE | ovn
18.2. Scaling clusters for application traffic by using Octavia
OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.
If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling.
If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling.
18.2.1. Scaling clusters by using Octavia
If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it.
Prerequisites
- Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
Procedure
From a command line, create an Octavia load balancer that uses the Amphora driver:
$ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>

You can use a name of your choice instead of API_OCP_CLUSTER.

After the load balancer becomes active, create listeners:
$ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER

Note: To view the status of the load balancer, enter openstack loadbalancer list.

Create a pool that uses the round robin algorithm and has session persistence enabled:
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP --listener API_OCP_CLUSTER_6443 --protocol HTTPS

To ensure that control plane machines are available, create a health monitor:
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443

Add the control plane machines as members of the load balancer pool:
$ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP
do
  openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443
done

Replace MASTER-0-IP, MASTER-1-IP, and MASTER-2-IP with the IP addresses of your control plane machines.

Optional: To reuse the cluster API floating IP address, unset it:
$ openstack floating ip unset $API_FIP

Add either the unset API_FIP or a new address to the created load balancer VIP:

$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
Your cluster now uses Octavia for load balancing.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).
You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
18.2.2. Scaling clusters that use Kuryr by using Octavia
If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer.
Prerequisites
- Your OpenShift Container Platform cluster uses Kuryr.
- Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
Procedure
Optional: From a command line, to reuse the cluster API floating IP address, unset it:
$ openstack floating ip unset $API_FIP

Add either the unset API_FIP or a new address to the created load balancer VIP:

$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value ${OCP_CLUSTER}-kuryr-api-loadbalancer) $API_FIP
Your cluster now uses Octavia for load balancing.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).
You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
18.3. Scaling for ingress traffic by using RHOSP Octavia
You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr.
Prerequisites
- Your OpenShift Container Platform cluster uses Kuryr.
- Octavia is available on your RHOSP deployment.
Procedure
To copy the current internal router service, on a command line, enter:
$ oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml

In the file external_router.yaml, change the value of metadata.name, for example to router-external-default, and change the value of spec.type to LoadBalancer.

Example router file
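The following is a sketch of what the edited service might look like, assuming the labels and ports of the default Ingress Controller service; verify the values against your own external_router.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
  name: router-external-default
  namespace: openshift-ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: metrics
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  sessionAffinity: None
  type: LoadBalancer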
You can delete timestamps and other information that is irrelevant to load balancing.
From a command line, create a service from the external_router.yaml file:

$ oc apply -f external_router.yaml

Verify that the external IP address of the service is the same as the one that is associated with the load balancer:
On a command line, retrieve the external IP address of the service:
$ oc -n openshift-ingress get svc

Example output

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                     AGE
router-external-default   LoadBalancer   172.30.235.33    10.46.22.161   80:30112/TCP,443:32359/TCP,1936:30317/TCP   3m38s
router-internal-default   ClusterIP      172.30.115.123   <none>         80/TCP,443/TCP,1936/TCP                     22h

Retrieve the IP address of the load balancer:
$ openstack loadbalancer list | grep router-external

Example output
| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |

Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list:
$ openstack floating ip list | grep 172.30.235.33

Example output
| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |
You can now use the value of EXTERNAL-IP as the new Ingress address.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).
You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.