Ingress and load balancing
Exposing services and managing external traffic in OpenShift Container Platform
Chapter 1. Configuring Routes
1.1. Route configuration
1.1.1. Creating an HTTP-based route
Create a route to host your application at a public URL. The route can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port.
The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in as an administrator.
- You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.
Procedure
Create a project called hello-openshift by running the following command:

$ oc new-project hello-openshift

Create a pod in the project by running the following command:

$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json

Create a service called hello-openshift by running the following command:

$ oc expose pod/hello-openshift

Create an unsecured route to the hello-openshift application by running the following command:

$ oc expose svc hello-openshift
Verification
To verify that the route resource was created, run the following command:

$ oc get routes -o yaml <name of resource> 1

1. In this example, the route is named hello-openshift.
Sample YAML definition of the created unsecured route
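The following is an illustrative sketch of the route that the previous command generates; the host value and target port are placeholders that depend on your cluster's ingress domain and your application, and the numbered markers correspond to the callouts below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  namespace: hello-openshift
spec:
  host: hello-openshift-hello-openshift.<Ingress_Domain> 1
  port:
    targetPort: 8080 2
  to:
    kind: Service
    name: hello-openshift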
1. The host field is an alias DNS record that points to the service. This field can be any valid DNS name, such as www.example.com. The DNS name must follow DNS952 subdomain conventions. If not specified, a route name is automatically generated.
2. The targetPort field is the target port on pods that is selected by the service that this route points to.

Note: To display your default ingress domain, run the following command:

$ oc get ingresses.config/cluster -o jsonpath={.spec.domain}
1.1.2. Creating a route for Ingress Controller sharding
A route allows you to host your application at a URL. Ingress Controller sharding helps balance incoming traffic load among a set of Ingress Controllers. It can also isolate traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in as a project administrator.
- You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.
- You have configured the Ingress Controller for sharding.
Procedure
Create a project called hello-openshift by running the following command:

$ oc new-project hello-openshift

Create a pod in the project by running the following command:

$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json

Create a service called hello-openshift by running the following command:

$ oc expose pod/hello-openshift

Create a route definition called hello-openshift-route.yaml:

YAML definition of the created route for sharding
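The following is an illustrative sketch of such a route definition; the label key and value, subdomain, and service name are examples that you adjust to match your sharded Ingress Controller and application, and the numbered markers correspond to the callouts below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    type: sharded 1
  name: hello-openshift-edge
  namespace: hello-openshift
spec:
  subdomain: hello-openshift 2
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift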
1. Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded.
2. The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field.
Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command:

$ oc -n hello-openshift create -f hello-openshift-route.yaml
Verification
Get the status of the route with the following command:

$ oc -n hello-openshift get routes/hello-openshift-edge -o yaml

The resulting Route resource should look similar to the following:

Example output
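The following is an illustrative sketch of the route status; the hostname and router name reflect the sharded Ingress Controller that admitted the route and vary by environment, and the numbered markers correspond to the callouts below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    type: sharded
  name: hello-openshift-edge
  namespace: hello-openshift
spec:
  subdomain: hello-openshift
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift
status:
  ingress:
  - host: hello-openshift.<apps-sharded.basedomain.example.net> 1
    routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2
    routerName: sharded 3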
1. The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net>.
2. The hostname of the Ingress Controller. If the hostname is not set, the route can use a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. When a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.
3. The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded.
1.1.3. Configuring route timeouts
You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.
If you configured a user-managed external load balancer in front of your OpenShift Container Platform cluster, ensure that the timeout value for the user-managed external load balancer is higher than the timeout value for the route. This configuration prevents network congestion issues over the network that your cluster uses.
Prerequisites
- You need a deployed Ingress Controller on a running cluster.
Procedure
Using the oc annotate command, add the timeout to the route:

$ oc annotate route <route_name> \
    --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1

1. Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).

The following example sets a timeout of two seconds on a route named myroute:

$ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
1.1.4. HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites.
When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect.
Cluster administrators can configure HSTS to do the following:
- Enable HSTS per-route
- Disable HSTS per-route
- Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains
HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes.
1.1.4.1. Enabling HTTP Strict Transport Security per-route
HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation.
Prerequisites
- You are logged in to the cluster with a user with administrator privileges for the project.
- You installed the OpenShift CLI (oc).
Procedure
To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command. To properly run the command, ensure that the semicolon (;) in the haproxy.router.openshift.io/hsts_header route annotation is also surrounded by double quotation marks ("").

Example annotate command that sets the maximum age to 31536000 seconds (approximately one year)

$ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header=max-age=31536000;\
includeSubDomains;preload"

Example route configured with an annotation
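The following is an illustrative sketch of a route carrying the HSTS annotation; the route name, hostname, and service name are placeholders, and the markers 1, 2, and 3 indicate the max-age, includeSubDomains, and preload directives described in the callouts below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload   1 2 3
  name: <route_name>
spec:
  host: www.example.com
  tls:
    termination: edge
  to:
    kind: Service
    name: <service_name>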
1. Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0, it negates the policy.
2. Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host.
3. Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header.
1.1.4.2. Disabling HTTP Strict Transport Security per-route
To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0.
Prerequisites
- You are logged in to the cluster with a user with administrator privileges for the project.
- You installed the OpenShift CLI (oc).
Procedure
To disable HSTS, set the max-age value in the route annotation to 0, by entering the following command:

$ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"

Tip: You can alternatively apply the following YAML to disable HSTS on the route:

Example of disabling HSTS per-route

metadata:
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=0

To disable HSTS for every route in a namespace, enter the following command:

$ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"
Verification
To query the annotation for all routes, enter the following command:

$ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

Example output

Name: routename HSTS: max-age=0
1.1.4.3. Enforcing HTTP Strict Transport Security per-domain
To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy.
If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation.
To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates.
You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations.
HSTS cannot be applied to insecure, or non-TLS routes, even if HSTS is requested for all routes globally.
Prerequisites
- You are logged in to the cluster with a user with administrator privileges for the project.
- You installed the OpenShift CLI (oc).
Procedure
Edit the Ingress configuration YAML by running the following command and updating fields as needed:

$ oc edit ingresses.config.openshift.io/cluster

Example HSTS policy
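The following is an illustrative sketch of such a policy; the domain patterns and namespace label are placeholders, and the numbered markers correspond to the callouts below:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: 'hello-openshift-default.apps.example.com'
  requiredHSTSPolicies: 1
  - domainPatterns: 2
    - '*hello-openshift-default.apps.example.com'
    - '*hello-openshift-default2.apps.example.com'
    namespaceSelector: 3
      matchLabels:
        myPolicy: strict
    maxAge: 4
      smallestMaxAge: 1
      largestMaxAge: 31536000
    preloadPolicy: RequirePreload 5
    includeSubDomainsPolicy: RequireIncludeSubDomains 6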
1. Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies.
2. Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns.
3. Optional. If you include namespaceSelector, it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated.
4. Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced.
   - The largestMaxAge value must be between 0 and 2147483647. It can be left unspecified, which means no upper limit is enforced.
   - The smallestMaxAge value must be between 0 and 2147483647. Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced.
5. Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following:
   - RequirePreload: preload is required by the RequiredHSTSPolicy.
   - RequireNoPreload: preload is forbidden by the RequiredHSTSPolicy.
   - NoOpinion: preload does not matter to the RequiredHSTSPolicy.
6. Optional. includeSubDomainsPolicy can be set with one of the following:
   - RequireIncludeSubDomains: includeSubDomains is required by the RequiredHSTSPolicy.
   - RequireNoIncludeSubDomains: includeSubDomains is forbidden by the RequiredHSTSPolicy.
   - NoOpinion: includeSubDomains does not matter to the RequiredHSTSPolicy.
You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command.

To apply HSTS to all routes in the cluster, enter the oc annotate command. For example:

$ oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"

To apply HSTS to all routes in a particular namespace, enter the oc annotate command. For example:

$ oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"
Verification
You can review the HSTS policy you configured. For example:

To review the maxAge set for required HSTS policies, enter the following command:

$ oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}'

To review the HSTS annotations on all routes, enter the following command:

$ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

Example output

Name: <routename> HSTS: max-age=31536000;preload;includeSubDomains
1.1.5. Throughput issue troubleshooting methods
Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services.
If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues:
Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.

For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.

$ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2> 1

1. podip is the IP address for the pod. Run the oc get pod <pod_name> -o wide command to get the IP address of a pod.
The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes with:

$ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789

Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes.

- For information on installing and using iperf, see this Red Hat Solution.
- In some cases, the cluster might mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action.
- If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod.
1.1.6. Using cookies to keep route statefulness
OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear.
OpenShift Container Platform can use cookies to configure session persistence. The ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the ingress controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod.
Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend.
If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod.
1.1.6.1. Annotating a route with a cookie
You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. Deleting the cookie can force the next request to re-choose an endpoint. The result is that if a server is overloaded, that server tries to remove the requests from the client and redistribute them.
Procedure
Annotate the route with the specified cookie name:

$ oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>"

where:

<route_name>
- Specifies the name of the route.
<cookie_name>
- Specifies the name for the cookie.

For example, to annotate the route my_route with the cookie name my_cookie:

$ oc annotate route my_route router.openshift.io/cookie_name="my_cookie"

Capture the route hostname in a variable:

$ ROUTE_NAME=$(oc get route <route_name> -o jsonpath='{.spec.host}')

where:

<route_name>
- Specifies the name of the route.

Save the cookie, and then access the route:

$ curl $ROUTE_NAME -k -c /tmp/cookie_jar

Use the cookie saved by the previous command when connecting to the route:

$ curl $ROUTE_NAME -k -b /tmp/cookie_jar
1.1.7. Path-based routes
Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least.
The following table shows example routes and their accessibility:
Route | When Compared to | Accessible
---|---|---
www.example.com/test | www.example.com/test | Yes
www.example.com/test | www.example.com | No
www.example.com/test and www.example.com | www.example.com/test | Yes
www.example.com/test and www.example.com | www.example.com | Yes
www.example.com | www.example.com/text | Yes (Matched by the host, not the route)
www.example.com | www.example.com | Yes
An unsecured route with a path
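The following is an illustrative sketch of such a route; the route name, hostname, and service name are placeholders, and the numbered marker corresponds to the callout below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-unsecured
spec:
  host: www.example.com
  path: "/test" 1
  to:
    kind: Service
    name: service-name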
1. The path is the only added attribute for a path-based route.
Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request.
1.1.8. HTTP header configuration
OpenShift Container Platform provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together.
You can only set or delete headers within an IngressController or Route CR; you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions.
1.1.8.1. Order of precedence
When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header.
- For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence.
- For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence.
For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration:

Example IngressController spec
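The following is an illustrative sketch of such an IngressController configuration; only the httpHeaders stanza is shown:

apiVersion: operator.openshift.io/v1
kind: IngressController
# ...
spec:
  httpHeaders:
    actions:
      response:
      - name: X-Frame-Options
        action:
          type: Set
          set:
            value: DENY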
A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration:

Example Route spec
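The following is an illustrative sketch of the corresponding Route configuration; only the httpHeaders stanza is shown:

apiVersion: route.openshift.io/v1
kind: Route
# ...
spec:
  httpHeaders:
    actions:
      response:
      - name: X-Frame-Options
        action:
          type: Set
          set:
            value: SAMEORIGIN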
When both the IngressController spec and Route spec are configuring the X-Frame-Options response header, then the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. For a request header, the Route spec value overrides the IngressController spec value.
This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end:
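The following is an illustrative sketch of the relevant haproxy.config fragments under that assumption; the backend name follows the router's naming scheme and is a placeholder:

frontend public
  http-response set-header X-Frame-Options 'DENY'

frontend fe_sni
  http-response set-header X-Frame-Options 'DENY'

frontend fe_no_sni
  http-response set-header X-Frame-Options 'DENY'

backend be_secure:<namespace>:<route_name>
  http-response set-header X-Frame-Options 'SAMEORIGIN'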
Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations.
1.1.8.2. Special case headers
The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances:
Header name | Configurable using IngressController spec | Configurable using Route spec | Reason for disallowment | Configurable using another method
---|---|---|---|---
proxy | No | No | The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during use. | No
host | No | Yes | When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. | No
strict-transport-security | No | No | The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. | Yes: the haproxy.router.openshift.io/hsts_header route annotation
cookie and set-cookie | No | No | The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy's session affinity and restrict HAProxy's ownership of a cookie. | Yes: the haproxy.router.openshift.io/disable_cookies route annotation and the router.openshift.io/cookie_name route annotation
1.1.9. Setting or deleting HTTP request and response headers in a route
You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes.
For example, you might want to enable a web application to serve content in alternate locations for specific routes if that content is written in multiple languages, even if there is a default global location specified by the Ingress Controller serving the routes.
The following procedure creates a route that sets the Content-Location HTTP response header so that the URL associated with the application, https://app.example.com, directs to the location https://app.example.com/lang/en-us. Directing application traffic to this location means that anyone using that specific route is accessing web content written in American English.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged into an OpenShift Container Platform cluster as a project administrator.
- You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.
Procedure
Create a route definition and save it in a file called app-example-route.yaml:

YAML definition of the created route with HTTP header directives
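The following is an illustrative sketch of such a route definition; the route name, hostname, and service name are placeholders, and the numbered markers correspond to the callouts below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-example-route
spec:
  host: app.example.com
  tls:
    termination: edge
  to:
    kind: Service
    name: app-example
  httpHeaders:
    actions: 1
      response: 2
      - name: Content-Location 3
        action:
          type: Set 4
          set:
            value: /lang/en-us 5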
1. The list of actions you want to perform on the HTTP headers.
2. The type of header you want to change. In this case, a response header.
3. The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration.
4. The type of action being taken on the header. This field can have the value Set or Delete.
5. When setting HTTP headers, you must provide a value. The value can be a string from a list of available directives for that header, for example DENY, or it can be a dynamic value that will be interpreted using HAProxy's dynamic value syntax. In this case, the value is set to the relative location of the content.
Create a route to your existing web application using the newly created route definition:

$ oc -n app-example create -f app-example-route.yaml
For HTTP request headers, the actions specified in the route definitions are executed after any actions performed on HTTP request headers in the Ingress Controller. This means that any values set for those request headers in a route will take precedence over the ones set in the Ingress Controller. For more information on the processing order of HTTP headers, see HTTP header configuration.
1.1.10. Route-specific annotations
The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route.
To create an allow list with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message.
Variable | Description | Environment variable used as default
---|---|---
haproxy.router.openshift.io/balance | Sets the load-balancing algorithm. Available options are random, source, roundrobin, and leastconn. | ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM.
haproxy.router.openshift.io/disable_cookies | Disables the use of cookies to track related connections. If set to true or TRUE, the balance algorithm is used to choose which back end serves connections for each incoming HTTP request. |
router.openshift.io/cookie_name | Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. |
haproxy.router.openshift.io/pod-concurrent-connections | Sets the maximum number of connections that are allowed to a backing pod from a router. |
haproxy.router.openshift.io/rate-limit-connections | Setting true or TRUE enables rate limiting functionality, which is implemented through stick-tables on the specific back end per route. |
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp | Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. |
haproxy.router.openshift.io/rate-limit-connections.rate-http | Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. |
haproxy.router.openshift.io/rate-limit-connections.rate-tcp | Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. |
haproxy.router.openshift.io/timeout | Sets a server-side timeout for the route. (TimeUnits) | ROUTER_DEFAULT_SERVER_TIMEOUT
haproxy.router.openshift.io/timeout-tunnel | This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. | ROUTER_DEFAULT_TUNNEL_TIMEOUT
ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after | You can set either an IngressController or the ingress config. This annotation redeploys the router and configures the HAProxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft stop. | ROUTER_HARD_STOP_AFTER
router.openshift.io/haproxy.health.check.interval | Sets the interval for the back-end health checks. (TimeUnits) | ROUTER_BACKEND_CHECK_INTERVAL
haproxy.router.openshift.io/ip_allowlist | Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. |
haproxy.router.openshift.io/hsts_header | Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. |
haproxy.router.openshift.io/rewrite-target | Sets the rewrite path of the request on the backend. |
router.openshift.io/cookie-same-site | Sets a value to restrict cookies. The values are Lax, Strict, and None. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation. |
haproxy.router.openshift.io/set-forwarded-headers | Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are append, replace, never, and if-none. | ROUTER_SET_FORWARDED_HEADERS
- By default, the router reloads every 5 s which resets the balancing connection across pods from the beginning. As a result, the roundrobin state is not preserved across reloads. This algorithm works best when pods have nearly identical computing capabilities and storage capacity. If your application or service has continuously changing endpoints, for example, due to the use of a CI/CD pipeline, uneven balancing can result. In this case, use a different algorithm.
- If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from the haproxy.config file. This file is stored in the /var/lib/haproxy/router/allowlists folder.

Note: To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist.
Environment variables cannot be edited.
Router timeout variables

TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days).

The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d).
Variable | Default | Description
---|---|---
ROUTER_BACKEND_CHECK_INTERVAL | 5000ms | Length of time between subsequent liveness checks on back ends.
ROUTER_CLIENT_FIN_TIMEOUT | 1s | Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router.
ROUTER_DEFAULT_CLIENT_TIMEOUT | 30s | Length of time that a client has to acknowledge or send data.
ROUTER_DEFAULT_CONNECT_TIMEOUT | 5s | The maximum connection time.
ROUTER_DEFAULT_SERVER_FIN_TIMEOUT | 1s | Controls the TCP FIN timeout from the router to the pod backing the route.
ROUTER_DEFAULT_SERVER_TIMEOUT | 30s | Length of time that a server has to acknowledge or send data.
ROUTER_DEFAULT_TUNNEL_TIMEOUT | 1h | Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads.
ROUTER_SLOWLORIS_HTTP_KEEPALIVE | 300s | Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, this variable adjusts timeout http-keep-alive, but HAProxy also waits on tcp-request inspect-delay, so the overall timeout is the sum of the two values.
ROUTER_SLOWLORIS_TIMEOUT | 10s | Length of time the transmission of an HTTP request can take.
RELOAD_INTERVAL | 5s | Allows the minimum frequency for the router to reload and accept new changes.
ROUTER_METRICS_HAPROXY_TIMEOUT | 5s | Timeout for the gathering of HAProxy metrics.
A route setting custom timeout
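The following is an illustrative sketch of a route annotated with a custom timeout; the route name, hostname, service name, and timeout value are placeholders, and the numbered marker corresponds to the callout below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms 1
  name: myroute
spec:
  host: www.example.com
  to:
    kind: Service
    name: myservice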
1. Specifies the new timeout with HAProxy supported units (us, ms, s, m, h, d). If the unit is not provided, ms is the default.
Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.
A route that allows only one specific IP address
metadata:
annotations:
haproxy.router.openshift.io/ip_allowlist: 192.168.1.10
A route that allows several IP addresses
metadata:
annotations:
haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 192.168.1.11 192.168.1.12
A route that allows an IP address CIDR network
metadata:
annotations:
haproxy.router.openshift.io/ip_allowlist: 192.168.1.0/24
A route that allows both an IP address and IP address CIDR networks
metadata:
annotations:
haproxy.router.openshift.io/ip_allowlist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8
A route specifying a rewrite target
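The following is an illustrative sketch of a route that sets a rewrite target; the route name, hostname, path, and service name are placeholders, and the numbered marker corresponds to the callout below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/rewrite-target: / 1
  name: myroute
spec:
  host: www.example.com
  path: /foo
  to:
    kind: Service
    name: myservice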
1. Sets / as rewrite path of the request on the backend.
Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation.
The following table provides examples of the path rewriting behavior for various combinations of spec.path, request path, and rewrite target.
Route.spec.path | Request path | Rewrite target | Forwarded request path |
---|---|---|---|
/foo | /foo | / | / |
/foo | /foo/ | / | / |
/foo | /foo/bar | / | /bar |
/foo | /foo/bar/ | / | /bar/ |
/foo | /foo | /bar | /bar |
/foo | /foo/ | /bar | /bar/ |
/foo | /foo/bar | /baz | /baz/bar |
/foo | /foo/bar/ | /baz | /baz/bar/ |
/foo/ | /foo | / | N/A (request path does not match route path) |
/foo/ | /foo/ | / | / |
/foo/ | /foo/bar | / | /bar |
Certain special characters in haproxy.router.openshift.io/rewrite-target require special handling because they must be escaped properly. Refer to the following table to understand how these characters are handled.
For character | Use characters | Notes |
---|---|---|
# | \# | Avoid # because it terminates the rewrite expression |
% | % or %% | Avoid odd sequences such as %%% |
‘ | \’ | Avoid ‘ because it is ignored |
All other valid URL characters can be used without escaping.
1.1.11. Configuring the route admission policy
Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname.
Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.
Prerequisites
- Cluster administrator privileges.
Procedure
Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

Sample Ingress Controller configuration

spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
...

Tip: You can alternatively apply the following YAML to configure the route admission policy:
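The following is an illustrative sketch of the equivalent IngressController resource, assuming the default controller in the openshift-ingress-operator namespace:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed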
1.1.12. Creating a route through an Ingress object
Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted.
Procedure
Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command:

YAML Definition of an Ingress
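The following is an illustrative sketch of such an Ingress object; the hostname, service name, and secret names are placeholders, and the numbered markers correspond to the callouts below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    route.openshift.io/termination: "reencrypt" 1
    route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2
spec:
  rules:
  - host: www.example.com 3
    http:
      paths:
      - backend:
          service:
            name: frontend
            port:
              number: 443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - www.example.com
    secretName: example-com-tls-certificate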
1. The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge, passthrough and reencrypt. All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route.
3. When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com, to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname.

If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec:
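The following is an illustrative sketch of that passthrough configuration; only the relevant rule is shown and the hostname and service are placeholders:

spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: ''
        pathType: ImplementationSpecific
        backend:
          service:
            name: frontend
            port:
              number: 443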
$ oc apply -f ingress.yaml
2. The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert, that will be inserted into the generated route.

- To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret.
List your routes:

$ oc get routes

The result includes an autogenerated route whose name starts with frontend-:

NAME             HOST/PORT         PATH   SERVICES   PORT   TERMINATION          WILDCARD
frontend-gnztq   www.example.com          frontend   443    reencrypt/Redirect   None

If you inspect this route, it looks similar to the following:

YAML Definition of an autogenerated route
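The following is an illustrative sketch of the autogenerated route; the generated name suffix, annotations, and certificate contents depend on the Ingress object and your cluster:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend-gnztq
  ownerReferences:
  - apiVersion: networking.k8s.io/v1
    controller: true
    kind: Ingress
    name: frontend
spec:
  host: www.example.com
  path: /
  port:
    targetPort: https
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: reencrypt
  to:
    kind: Service
    name: frontend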
1.1.13. Creating a route using the default certificate through an Ingress object
If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows.
Prerequisites
- You have a service that you want to expose.
- You have access to the OpenShift CLI (oc).
Procedure
Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml:

YAML definition of an Ingress object
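The following is an illustrative sketch of such an Ingress object; the name, hostname, and service details are placeholders, and the numbered marker corresponds to the callout below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: frontend
            port:
              number: 443
        path: /
        pathType: Prefix
  tls:
  - {} 1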
1. Use this exact syntax to specify TLS without specifying a custom certificate.

Create the Ingress object by running the following command:

$ oc create -f example-ingress.yaml
Verification
Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command:

$ oc get routes -o yaml

Example output
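The following is an illustrative sketch of the generated route; the generated name suffix and exact field values vary by cluster:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend-<suffix>
spec:
  host: www.example.com
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: frontend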
1.1.14. Creating a route using the destination CA certificate in the Ingress annotation
The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate.
Prerequisites
- You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
- You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
- You must have a separate destination CA certificate in a PEM-encoded file.
- You must have a service that you want to expose.
Procedure
Create a secret for the destination CA certificate by entering the following command:

$ oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>

For example:

$ oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt

Example output

secret/dest-ca-cert created

Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations:
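The following is an illustrative sketch of the annotated Ingress object; only the metadata stanza is shown, the secret name matches the secret created above, and the numbered marker corresponds to the callout below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    route.openshift.io/termination: "reencrypt"
    route.openshift.io/destination-ca-certificate-secret: dest-ca-cert 1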
1. The annotation references a kubernetes secret.

The secret referenced in this annotation will be inserted into the generated route.

Example output
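The following is an illustrative sketch of the relevant part of the generated route; the certificate contents are omitted and the hostname and service are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  tls:
    termination: reencrypt
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
  to:
    kind: Service
    name: frontend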
1.1.15. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking
If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes.
The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services.
Prerequisites
- You deployed an OpenShift Container Platform cluster on bare metal.
- You installed the OpenShift CLI (oc).
Procedure
To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example:

Sample service YAML file
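The following is an illustrative sketch of a dual-stack service; the service name, selector, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: hello-openshift
spec:
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-openshift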
These resources generate corresponding endpoints. The Ingress Controller now watches endpointslices.

To view endpoints, enter the following command:

$ oc get endpoints

To view endpointslices, enter the following command:

$ oc get endpointslices
1.2. Secured routes
Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates.
If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
1.2.1. Creating a re-encrypt route with a custom certificate
You can configure a secure route using re-encrypt TLS termination with a custom certificate by using the oc create route command.
This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, destca.crt, and (optionally) ca.crt. Substitute the name of the Service resource that you want to expose for frontend. Substitute the appropriate hostname for www.example.com.
Prerequisites
- You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
- You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
- You must have a separate destination CA certificate in a PEM-encoded file.
- You must have a service that you want to expose.
Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

$ openssl rsa -in password_protected_tls.key -out tls.key
Procedure
Create a secure Route resource using reencrypt TLS termination and a custom certificate:

$ oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com

If you examine the resulting Route resource, it should look similar to the following:

YAML Definition of the Secure Route
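The following is an illustrative sketch of the resulting route; the certificate blocks are truncated and the hostname and service name are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: reencrypt
    key: |-
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----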
See oc create route reencrypt --help for more options.
1.2.2. Creating an edge route with a custom certificate
You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route.
This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, and (optionally) ca.crt. Substitute the name of the service that you want to expose for frontend. Substitute the appropriate hostname for www.example.com.
Prerequisites
- You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
- You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
- You must have a service that you want to expose.
Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

$ openssl rsa -in password_protected_tls.key -out tls.key
Procedure
Create a secure Route resource using edge TLS termination and a custom certificate:

$ oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com

If you examine the resulting Route resource, it should look similar to the following:

YAML Definition of the Secure Route
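The following is an illustrative sketch of the resulting route; the certificate blocks are truncated and the hostname and service name are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: edge
    key: |-
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----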
See oc create route edge --help for more options.
1.2.3. Creating a passthrough route
You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route.
Prerequisites
- You must have a service that you want to expose.
Procedure
Create a Route resource:

$ oc create route passthrough route-passthrough-secured --service=frontend --port=8080

If you examine the resulting Route resource, it should look similar to the following:

A Secured Route Using Passthrough Termination
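The following is an illustrative sketch of the resulting route; the route and service names match the command above and the hostname is a placeholder:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-passthrough-secured
spec:
  host: www.example.com
  port:
    targetPort: 8080
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  to:
    kind: Service
    name: frontend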
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication.
1.2.4. Creating a route with externally managed certificates
You can configure OpenShift Container Platform routes with third-party certificate management solutions by using the .spec.tls.externalCertificate field of the route API. You can reference externally managed TLS certificates via secrets, eliminating the need for manual certificate management. Using the externally managed certificate reduces errors, ensuring a smoother rollout of certificate updates and enabling the OpenShift router to serve renewed certificates promptly.
You can use externally managed certificates with both edge routes and re-encrypt routes.
Prerequisites
- You must enable the RouteExternalCertificate feature gate.
- You have create permission on the routes/custom-host sub-resource, which is used for both creating and updating routes.
- You must have a secret containing a valid certificate/key pair in PEM-encoded format of type kubernetes.io/tls, which includes both tls.key and tls.crt keys.
keys. - You must place the referenced secret in the same namespace as the route you want to secure.
Procedure
Create a role in the same namespace as the secret to allow the router service account read access by running the following command:

$ oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \ 1
    --namespace=<current-namespace> 2

1. Specify the name of your secret.
2. Specify the namespace where both your secret and route reside.

Create a rolebinding in the same namespace as the secret and bind the router service account to the newly created role by running the following command:

$ oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1

1. Specify the namespace where both your secret and route reside.

Create a YAML file that defines the route and specifies the secret containing your certificate using the following example.

YAML definition of the secure route
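The following is an illustrative sketch of such a route; the route name, namespace, hostname, and service name are placeholders, and the numbered marker corresponds to the callout below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myedge
  namespace: test
spec:
  host: myedge.mydomain.com
  tls:
    externalCertificate:
      name: <secret-name> 1
    termination: edge
  to:
    kind: Service
    name: frontend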
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the actual name of your secret.
Create a
route
resource by running the following command:oc apply -f <route.yaml>
$ oc apply -f <route.yaml>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the generated YAML filename.
If the secret exists and contains a valid certificate/key pair, the router serves the referenced certificate, provided all prerequisites are met.
NoteIf
.spec.tls.externalCertificate
is not provided, the router will use default generated certificates.You cannot provide the
.spec.tls.certificate
field or the.spec.tls.key
field when using the.spec.tls.externalCertificate
field.
Chapter 2. Configuring ingress cluster traffic Copy linkLink copied to clipboard!
2.1. Configuring ingress cluster traffic overview Copy linkLink copied to clipboard!
OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster.
The methods are recommended, in order of preference:
- If you have HTTP/HTTPS, use an Ingress Controller.
- If you have a TLS-encrypted protocol other than HTTPS, for example, TLS with the SNI header, use an Ingress Controller.
-
Otherwise, use a Load Balancer, an External IP, or a
NodePort
.
Method | Purpose |
---|---|
Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). | |
Automatically assign an external IP using a load balancer service | Allows traffic to non-standard ports through an IP address assigned from a pool. Most cloud platforms offer a method to start a service with a load-balancer IP address. |
Allows traffic to a specific IP address or address from a pool on the machine network. For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address. | |
Allows traffic to non-standard ports through a specific IP address. | |
Expose a service on all nodes in the cluster. |
2.2.1. Comparison: Fault tolerant access to external IP addresses Copy linkLink copied to clipboard!
For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration. The following features provide fault tolerant access to an external IP address.
- IP failover
- IP failover manages a pool of virtual IP addresses for a set of nodes. It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks.
- MetalLB
- MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node.
- Manually assigning external IP addresses
- You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes.
2.2. Configuring ExternalIPs for services Copy linkLink copied to clipboard!
As a cluster administrator, you can select an IP address block that is external to the cluster that can send traffic to services in the cluster.
This functionality is generally most useful for clusters installed on bare-metal hardware.
2.2.1. Prerequisites Copy linkLink copied to clipboard!
- Your network infrastructure must route traffic for the external IP addresses to your cluster.
2.2.2. About ExternalIP Copy linkLink copied to clipboard!
For non-cloud environments, OpenShift Container Platform supports the use of the ExternalIP facility to specify external IP addresses in the spec.externalIPs[]
parameter of the Service
object. A service configured with an ExternalIP functions similarly to a service with type=NodePort
, whereby traffic is directed to a local node for load balancing.
For cloud environments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service.
After you specify a value for the parameter, OpenShift Container Platform assigns an additional virtual IP address to the service. The IP address can exist outside of the service network that you defined for your cluster.
ExternalIP functionality is disabled by default. Enabling it can introduce security risks for the service, because in-cluster traffic to an external IP address is directed to that service. This configuration means that cluster users could intercept sensitive traffic destined for external resources.
You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service in the following ways:
- Automatic assignment of an external IP
-
OpenShift Container Platform automatically assigns an IP address from the
autoAssignCIDRs
CIDR block to thespec.externalIPs[]
array when you create aService
object withspec.type=LoadBalancer
set. For this configuration, OpenShift Container Platform implements a non-cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the "Configuration for ExternalIP" section. - Manual assignment of an external IP
-
OpenShift Container Platform uses the IP addresses assigned to the
spec.externalIPs[]
array when you create aService
object. You cannot specify an IP address that is already in use by another service.
After using either the MetalLB implementation or an IP failover deployment to host external IP address blocks, you must configure your networking infrastructure to ensure that the external IP address blocks are routed to your cluster. This configuration means that the IP address is not configured in the network interfaces from nodes. To handle the traffic, you must configure the routing and access to the external IP by using a method, such as static Address Resolution Protocol (ARP) entries.
OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities:
- Restrictions on the use of external IP addresses by users through a configurable policy
- Allocation of an external IP address automatically to a service upon request
2.2.3. Configuration for ExternalIP Copy linkLink copied to clipboard!
The following parameters in the Network.config.openshift.io
custom resource (CR) govern the use of an external IP address in OpenShift Container Platform:
-
spec.externalIP.autoAssignCIDRs
defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This configuration requires fewer steps than manually assigning ExternalIPs to services, which requires managing the port space of a limited number of shared IP addresses. If you enable automatic assignment, the Cloud Controller Manager Operator allocates an external IP address to aService
object withspec.type=LoadBalancer
defined in its configuration. -
spec.externalIP.policy
defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks that you defined in thespec.externalIP.autoAssignCIDRs
parameter.
If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes.
As a cluster administrator, you must configure routing to externalIPs. You must also ensure that the IP address block you assign terminates at one or more nodes in your cluster. For more information, see Kubernetes External IPs.
OpenShift Container Platform supports both automatic and manual IP address assignment. This support guarantees that each address gets assigned to a maximum of one service and that each service can expose its chosen ports regardless of the ports exposed by other services.
To use IP address blocks defined by autoAssignCIDRs
in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network.
The following YAML describes a service with an external IP address configured:
Example Service
object with spec.externalIPs[]
set
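A minimal sketch, using illustrative addresses from the 192.0.2.0/24 documentation range:
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  externalIPs:
  - 192.0.2.10
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web
  type: ClusterIP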
If you run a private cluster on a cloud-provider platform, you can change the publishing scope to internal
for the load balancer of the Ingress Controller by running the following patch
command:
oc -n openshift-ingress-operator patch ingresscontrollers/ingress-controller-with-nlb --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"scope":"Internal"}}}}'
$ oc -n openshift-ingress-operator patch ingresscontrollers/ingress-controller-with-nlb --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"scope":"Internal"}}}}'
After you run this command, the Ingress Controller restricts access to routes for OpenShift Container Platform applications to internal networks only.
2.2.4. Restrictions on the assignment of an external IP address Copy linkLink copied to clipboard!
As a cluster administrator, you can specify IP address blocks to allow and to reject IP addresses for a service. Restrictions apply only to users without cluster-admin
privileges. A cluster administrator can always set the service spec.externalIPs[]
field to any IP address.
You configure an IP address policy by specifying Classless Inter-Domain Routing (CIDR) address blocks for the spec.externalIP.policy
parameter in the policy
object.
Example in JSON form of a policy
object and its CIDR parameters
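A sketch with illustrative CIDR values:
{
  "policy": {
    "allowedCIDRs": ["192.0.2.0/24"],
    "rejectedCIDRs": ["192.0.2.128/25"]
  }
}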
When configuring policy restrictions, the following rules apply:
-
If
policy
is set to{}
, creating aService
object withspec.externalIPs[]
results in a failed service. This setting is the default for OpenShift Container Platform. The same behavior exists forpolicy: null
. If
policy
is set and eitherpolicy.allowedCIDRs[]
orpolicy.rejectedCIDRs[]
is set, the following rules apply:-
If
allowedCIDRs[]
andrejectedCIDRs[]
are both set,rejectedCIDRs[]
has precedence overallowedCIDRs[]
. -
If
allowedCIDRs[]
is set, creating aService
object withspec.externalIPs[]
succeeds only if the specified IP addresses are allowed. -
If
rejectedCIDRs[]
is set, creating aService
object withspec.externalIPs[]
succeeds only if the specified IP addresses are not rejected.
-
If
2.2.5. Example policy objects Copy linkLink copied to clipboard!
The examples in this section show different spec.externalIP.policy
configurations.
In the following example, the policy prevents OpenShift Container Platform from creating any service with a specified external IP address.
Example policy to reject any value specified for
Service
objectspec.externalIPs[]
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the following example, both the
allowedCIDRs
andrejectedCIDRs
fields are set.Example policy that includes both allowed and rejected CIDR blocks
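A sketch with illustrative CIDR ranges:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    policy:
      allowedCIDRs:
      - 192.0.2.0/24
      rejectedCIDRs:
      - 192.0.2.128/25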
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the following example,
policy
is set to{}
. With this configuration, if you use theoc get networks.config.openshift.io -o yaml
command to view the configuration, thepolicy
parameter does not appear in the command output. The same behavior exists forpolicy: null
.Example policy to allow any value specified for
Service
objectspec.externalIPs[]
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.2.6. ExternalIP address block configuration Copy linkLink copied to clipboard!
The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster
. The Network CR is part of the config.openshift.io
API group.
During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster
. Creating any other CR objects of this type is not supported.
The following YAML describes the ExternalIP configuration:
Network.config.openshift.io CR named cluster
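A sketch of the relevant stanza; the autoAssignCIDRs and policy fields correspond to callouts 1 and 2 below:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs: []  # 1
    policy:              # 2
      allowedCIDRs: []
      rejectedCIDRs: []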
- 1
- Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed.
- 2
- Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the
spec.externalIP
field in aService
object is not allowed. By default, no restrictions are defined.
The following YAML describes the fields for the policy
stanza:
Network.config.openshift.io policy
stanza
policy: allowedCIDRs: [] rejectedCIDRs: []
policy:
allowedCIDRs: []
rejectedCIDRs: []
2.2.6.1. Example external IP configurations Copy linkLink copied to clipboard!
Several possible configurations for external IP address pools are displayed in the following examples:
The following YAML describes a configuration that enables automatically assigned external IP addresses:
Example configuration with
spec.externalIP.autoAssignCIDRs
setCopy to Clipboard Copied! Toggle word wrap Toggle overflow The following YAML configures policy rules for the allowed and rejected CIDR ranges:
Example configuration with
spec.externalIP.policy
setCopy to Clipboard Copied! Toggle word wrap Toggle overflow
2.2.7. Configure external IP address blocks for your cluster Copy linkLink copied to clipboard!
As a cluster administrator, you can configure the following ExternalIP settings:
-
An ExternalIP address block used by OpenShift Container Platform to automatically populate the
spec.externalIPs[]
field for aService
object. -
A policy object to restrict what IP addresses may be manually assigned to the
spec.externalIPs[]
array of aService
object.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Access to the cluster as a user with the
cluster-admin
role.
Procedure
Optional: To display the current external IP configuration, enter the following command:
oc describe networks.config cluster
$ oc describe networks.config cluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To edit the configuration, enter the following command:
oc edit networks.config cluster
$ oc edit networks.config cluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Modify the ExternalIP configuration, as in the following example:
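For example, the edited externalIP stanza might look similar to the following sketch, with illustrative CIDR values:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  externalIP:
    autoAssignCIDRs:
    - 192.0.2.0/24
    policy:
      allowedCIDRs:
      - 192.0.2.0/24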
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the configuration for the
externalIP
stanza.
To confirm the updated ExternalIP configuration, enter the following command:
oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}'
$ oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.2.9. Next steps Copy linkLink copied to clipboard!
2.3. Configuring ingress cluster traffic using an Ingress Controller Copy linkLink copied to clipboard!
OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller.
2.3.1. Using Ingress Controllers and routes Copy linkLink copied to clipboard!
The Ingress Operator manages Ingress Controllers and wildcard DNS.
Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster.
An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.
Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes.
The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators.
By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster.
The Ingress Controller:
- Has two replicas by default, which means it should be running on two worker nodes.
- Can be scaled up to have more replicas on more nodes.
The procedures in this section require prerequisites performed by the cluster administrator.
2.3.2. Prerequisites Copy linkLink copied to clipboard!
Before starting the following procedures, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster.
Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command:
oc adm policy add-cluster-role-to-user cluster-admin username
$ oc adm policy add-cluster-role-to-user cluster-admin username
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - You have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
2.3.3. Creating a project and service Copy linkLink copied to clipboard!
If the project and service that you want to expose do not exist, create the project and then create the service.
If the project and service already exist, skip to the procedure on exposing the service to create a route.
Prerequisites
-
Install the OpenShift CLI (
oc
) and log in as a cluster administrator.
Procedure
Create a new project for your service by running the
oc new-project
command:oc new-project <project_name>
$ oc new-project <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
oc new-app
command to create your service:oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the service was created, run the following command:
oc get svc -n <project_name>
$ oc get svc -n <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteBy default, the new service does not have an external IP address.
2.3.4. Exposing the service by creating a route Copy linkLink copied to clipboard!
You can expose the service as a route by using the oc expose
command.
Prerequisites
- You logged into OpenShift Container Platform.
Procedure
Log in to the project where the service you want to expose is located:
oc project <project_name>
$ oc project <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the
oc expose service
command to expose the route:oc expose service nodejs-ex
$ oc expose service nodejs-ex
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
route.route.openshift.io/nodejs-ex exposed
route.route.openshift.io/nodejs-ex exposed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the service is exposed, you can use a tool, such as
curl
to check that the service is accessible from outside the cluster.To find the hostname of the route, enter the following command:
oc get route
$ oc get route
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To check that the host responds to a GET request, enter the following command:
Example
curl
commandcurl --head nodejs-ex-myproject.example.com
$ curl --head nodejs-ex-myproject.example.com
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
HTTP/1.1 200 OK ...
HTTP/1.1 200 OK ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.3.5. Ingress sharding in OpenShift Container Platform Copy linkLink copied to clipboard!
In OpenShift Container Platform, an Ingress Controller can serve all routes, or it can serve a subset of routes. By default, the Ingress Controller serves any route created in any namespace in the cluster. You can add additional Ingress Controllers to your cluster to optimize routing by creating shards, which are subsets of routes based on selected characteristics. To mark a route as a member of a shard, use labels in the route or namespace metadata
field. The Ingress Controller uses selectors, also known as a selection expression, to select a subset of routes from the entire pool of routes to serve.
Ingress sharding is useful in cases where you want to load balance incoming traffic across multiple Ingress Controllers, when you want to isolate traffic to be routed to a specific Ingress Controller, or for a variety of other reasons described in the next section.
By default, each route uses the default domain of the cluster. However, routes can be configured to use the domain of the router instead.
2.3.6. Ingress Controller sharding Copy linkLink copied to clipboard!
You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered by using a given selection expression.
As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller can be significant. As a cluster administrator, you can shard the routes to:
- Balance Ingress Controllers, or routers, with several routes to accelerate responses to changes.
- Assign certain routes to have different reliability guarantees than other routes.
- Allow certain Ingress Controllers to have different policies defined.
- Allow only specific routes to use additional features.
- Expose different routes on different addresses so that internal and external users can see different routes, for example.
- Transfer traffic from one version of an application to another during a blue-green deployment.
When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. The status of a route describes whether an Ingress Controller has admitted the route. An Ingress Controller only admits a route if the route is unique to a shard.
With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be nonoverlapping, also called traditional sharding, or overlapping, otherwise known as overlapped sharding.
The following table outlines three sharding methods:
Sharding method | Description |
---|---|
Namespace selector | After you add a namespace selector to the Ingress Controller, all routes in a namespace that have matching labels for the namespace selector are included in the Ingress shard. Consider this method when an Ingress Controller serves all routes created in a namespace. |
Route selector | After you add a route selector to the Ingress Controller, all routes with labels that match the route selector are included in the Ingress shard. Consider this method when you want an Ingress Controller to serve only a subset of routes or a specific route in a namespace. |
Namespace and route selectors | Provides your Ingress Controller scope for both namespace selector and route selector methods. Consider this method when you want the flexibility of both the namespace selector and the route selector methods. |
2.3.6.1. Traditional sharding example Copy linkLink copied to clipboard!
An example of a configured Ingress Controller finops-router
that has the label selector spec.namespaceSelector.matchExpressions
with key values set to finance
and ops
:
Example YAML definition for finops-router
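A sketch of such an Ingress Controller, assuming the namespaces are labeled with the name key:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: finops-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - finance
      - ops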
An example of a configured Ingress Controller dev-router
that has the label selector spec.namespaceSelector.matchLabels.name
with the key value set to dev
:
Example YAML definition for dev-router
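A sketch of the corresponding Ingress Controller:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: dev-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchLabels:
      name: dev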
If all application routes are in separate namespaces, such as each labeled with name:finance
, name:ops
, and name:dev
, the configuration effectively distributes your routes between the two Ingress Controllers. OpenShift Container Platform routes for the console, authentication, and other purposes should not be handled by these Ingress Controllers.
In the previous scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards.
The default
Ingress Controller continues to serve all routes unless the namespaceSelector
or routeSelector
fields are configured to exclude them. See this Red Hat Knowledgebase solution and the section "Sharding the default Ingress Controller" for more information on how to exclude routes from the default Ingress Controller.
2.3.6.2. Overlapped sharding example Copy linkLink copied to clipboard!
An example of a configured Ingress Controller devops-router
that has the label selector spec.namespaceSelector.matchExpressions
with key values set to dev
and ops
:
Example YAML definition for devops-router
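A sketch of the corresponding Ingress Controller:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: devops-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - dev
      - ops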
The routes in the namespaces labeled name:dev
and name:ops
are now serviced by two different Ingress Controllers. With this configuration, you have overlapping subsets of routes.
With overlapping subsets of routes you can create more complex routing rules. For example, you can divert higher priority traffic to the dedicated finops-router
while sending lower priority traffic to devops-router
.
2.3.6.3. Sharding the default Ingress Controller Copy linkLink copied to clipboard!
After creating a new Ingress shard, there might be routes that are admitted to your new Ingress shard that are also admitted by the default Ingress Controller. This is because the default Ingress Controller has no selectors and admits all routes by default.
You can restrict an Ingress Controller from servicing routes with specific labels using either namespace selectors or route selectors. The following procedure restricts the default Ingress Controller from servicing your newly sharded finance
, ops
, and dev
routes using a namespace selector. This adds further isolation to Ingress shards.
You must keep all of OpenShift Container Platform’s administration routes on the same Ingress Controller. Therefore, avoid adding additional selectors to the default Ingress Controller that exclude these essential routes.
Prerequisites
-
You installed the OpenShift CLI (
oc
). - You are logged in as a project administrator.
Procedure
Modify the default Ingress Controller by running the following command:
oc edit ingresscontroller -n openshift-ingress-operator default
$ oc edit ingresscontroller -n openshift-ingress-operator default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the Ingress Controller to contain a
namespaceSelector
that excludes the routes with any of thefinance
,ops
, anddev
labels:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
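A sketch of the edited spec, assuming the namespaces are labeled with the name key as in the earlier sharding examples:
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: NotIn
      values:
      - finance
      - ops
      - dev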
The default Ingress Controller will no longer serve the namespaces labeled name:finance
, name:ops
, and name:dev
.
2.3.6.4. Ingress sharding and DNS Copy linkLink copied to clipboard!
The cluster administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router.
Consider the following example:
-
Router A lives on host 192.168.0.5 and has routes with
*.foo.com
. -
Router B lives on host 192.168.1.9 and has routes with
*.example.com
.
Separate DNS entries must resolve *.foo.com
to the node hosting Router A and *.example.com
to the node hosting Router B:
-
*.foo.com A IN 192.168.0.5
-
*.example.com A IN 192.168.1.9
2.3.6.5. Configuring Ingress Controller sharding by using route labels Copy linkLink copied to clipboard!
Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.
Figure 2.1. Ingress sharding using route labels
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Procedure
Edit the
router-internal.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain.
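A sketch of what router-internal.yaml might contain for this route-selector configuration, assuming the shard label type: sharded used later in this procedure and an example domain:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: apps-sharded.basedomain.example.net  # 1
  routeSelector:
    matchLabels:
      type: sharded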
Apply the Ingress Controller
router-internal.yaml
file:oc apply -f router-internal.yaml
$ oc apply -f router-internal.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The Ingress Controller selects routes in any namespace that have the label
type: sharded
.Create a new route using the domain configured in the
router-internal.yaml
:oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
$ oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.3.6.6. Configuring Ingress Controller sharding by using namespace labels Copy linkLink copied to clipboard!
Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.
Figure 2.2. Ingress sharding using namespace labels
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Procedure
Edit the
router-internal.yaml
file:cat router-internal.yaml
$ cat router-internal.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain.
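A sketch of what router-internal.yaml might contain for this namespace-selector configuration, assuming the shard label type: sharded and an example domain:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: apps-sharded.basedomain.example.net  # 1
  namespaceSelector:
    matchLabels:
      type: sharded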
Apply the Ingress Controller
router-internal.yaml
file:oc apply -f router-internal.yaml
$ oc apply -f router-internal.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label
type: sharded
.Create a new route using the domain configured in the
router-internal.yaml
:oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
$ oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.3.6.7. Creating a route for Ingress Controller sharding Copy linkLink copied to clipboard!
A route allows you to host your application at a URL. Ingress Controller sharding helps balance incoming traffic load among a set of Ingress Controllers. It can also isolate traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift
application as an example.
Prerequisites
-
You installed the OpenShift CLI (
oc
). - You are logged in as a project administrator.
- You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.
- You have configured the Ingress Controller for sharding.
Procedure
Create a project called
hello-openshift
by running the following command:oc new-project hello-openshift
$ oc new-project hello-openshift
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a pod in the project by running the following command:
oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a service called
hello-openshift
by running the following command:oc expose pod/hello-openshift
$ oc expose pod/hello-openshift
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a route definition called
hello-openshift-route.yaml
:YAML definition of the created route for sharding
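A sketch of hello-openshift-route.yaml, using the route name that the verification step below refers to:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift-edge
  namespace: hello-openshift
  labels:
    type: sharded  # 1
spec:
  subdomain: hello-openshift  # 2
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift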
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value
type: sharded
. - 2
- The route will be exposed using the value of the
subdomain
field. When you specify thesubdomain
field, you must leave the hostname unset. If you specify both thehost
andsubdomain
fields, then the route will use the value of thehost
field, and ignore thesubdomain
field.
Use
hello-openshift-route.yaml
to create a route to thehello-openshift
application by running the following command:oc -n hello-openshift create -f hello-openshift-route.yaml
$ oc -n hello-openshift create -f hello-openshift-route.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Get the status of the route with the following command:
oc -n hello-openshift get routes/hello-openshift-edge -o yaml
$ oc -n hello-openshift get routes/hello-openshift-edge -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The resulting
Route
resource should look similar to the following:Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The hostname the Ingress Controller, or router, uses to expose the route. The value of the
host
field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is<apps-sharded.basedomain.example.net>
. - 2
- The hostname of the Ingress Controller. If the hostname is not set, the route can use a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. When a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.
- 3
- The name of the Ingress Controller. In this example, the Ingress Controller has the name
sharded
.
2.3.6.8. Additional resources Copy linkLink copied to clipboard!
2.4. Configuring the Ingress Controller endpoint publishing strategy Copy linkLink copied to clipboard!
The endpointPublishingStrategy
is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.
On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService
endpoint publishing strategy is supported only if a cloud provider is configured to create health monitors. For RHOSP 16.2, this strategy is possible only if you use the Amphora Octavia provider.
For more information, see the "Setting RHOSP Cloud Controller Manager options" section of the RHOSP installation documentation.
2.4.1. Ingress Controller endpoint publishing strategy Copy linkLink copied to clipboard!
NodePortService
endpoint publishing strategy
The NodePortService
endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.
In this configuration, the Ingress Controller deployment uses container networking. A NodePortService
is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService
are preserved.
Figure 2.3. Diagram of NodePortService
The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy:
- All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes.
-
When the client connects to a node that is down, for example, by connecting to the
10.0.128.4
IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. As the image shows, the10.0.128.4
address is down and another IP address must be used instead.
The Ingress Operator ignores any updates to .spec.ports[].nodePort
fields of the service.
By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.
For more information, see the Kubernetes Services documentation on NodePort
.
HostNetwork
endpoint publishing strategy
The HostNetwork
endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.
An Ingress Controller with the HostNetwork
endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80
and 443
on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
The HostNetwork
object has a hostNetwork
field with the following default values for the optional binding ports: httpPort: 80
, httpsPort: 443
, and statsPort: 1936
. By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork
strategy.
Example
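A sketch of an Ingress Controller that sets these binding ports explicitly; the name and domain are placeholders:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: example.com
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 80
      httpsPort: 443
      statsPort: 1936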
2.4.1.1. Configuring the Ingress Controller endpoint publishing scope to Internal Copy linkLink copied to clipboard!
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope
set to External
. Cluster administrators can change an External
scoped Ingress Controller to Internal
.
Prerequisites
-
You installed the
oc
CLI.
Procedure
To change an
External
scoped Ingress Controller toInternal
, enter the following command:oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To check the status of the Ingress Controller, enter the following command:
oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
Progressing
status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:oc -n openshift-ingress delete services/router-default
$ oc -n openshift-ingress delete services/router-default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If you delete the service, the Ingress Operator recreates it as
Internal
.
2.4.1.2. Configuring the Ingress Controller endpoint publishing scope to External Copy linkLink copied to clipboard!
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope
set to External
.
The Ingress Controller’s scope can be configured to be Internal
during installation or after, and cluster administrators can change an Internal
Ingress Controller to External
.
On some platforms, it is necessary to delete and recreate the service.
Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS.
Prerequisites
-
You installed the
oc
CLI.
Procedure
To change an
Internal
scoped Ingress Controller toExternal
, enter the following command:oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}'
$ oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To check the status of the Ingress Controller, enter the following command:
oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
Progressing
status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:oc -n openshift-ingress delete services/router-default
$ oc -n openshift-ingress delete services/router-default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If you delete the service, the Ingress Operator recreates it as
External
.
2.4.1.3. Adding a single NodePort service to an Ingress Controller Copy linkLink copied to clipboard!
Instead of creating a NodePort
-type Service
for each project, you can create a custom Ingress Controller to use the NodePortService
endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a HostNetwork
Ingress Controller.
Before you set a NodePort
-type Service
for each project, read the following considerations:
- You must create a wildcard DNS record for the Nodeport Ingress Controller domain. A Nodeport Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements".
-
You must expose a route for your service and specify the
--hostname
argument for your custom Ingress Controller domain. -
You must append the port that is assigned to the
NodePort
-typeService
in the route so that you can access application pods.
Prerequisites
-
You installed the OpenShift CLI (
oc
). -
Logged in as a user with
cluster-admin
privileges. - You created a wildcard DNS record.
Procedure
Create a custom resource (CR) file for the Ingress Controller:
Example of a CR file that defines information for the
IngressController
objectCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify a custom
name
for theIngressController
CR. - 2
- The DNS name that the Ingress Controller services. As an example, the default ingresscontroller domain is
apps.ipi-cluster.example.com
, so you would specify the<custom_ic_domain_name>
asnodeportsvc.ipi-cluster.example.com
. - 3
- Specify the label for the nodes that include the custom Ingress Controller.
- 4
- Specify the label for a set of namespaces. Substitute
<key>:<value>
with a map of key-value pairs where<key>
is a unique name for the new label and<value>
is its value. For example:ingresscontroller: custom-ic
.
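Putting the preceding callouts together, the CR file might look similar to the following sketch; the name, domain, and label values are placeholders:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <custom_ic_name>  # 1
  namespace: openshift-ingress-operator
spec:
  replicas: 1
  domain: <custom_ic_domain_name>  # 2
  nodePlacement:
    nodeSelector:
      matchLabels:
        <key>: <value>  # 3
  namespaceSelector:
    matchLabels:
      <key>: <value>  # 4
  endpointPublishingStrategy:
    type: NodePortService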
Add a label to a node by using the
oc label node
command:oc label node <node_name> <key>=<value>
$ oc label node <node_name> <key>=<value>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Where
<value>
must match the key-value pair specified in thenodePlacement
section of yourIngressController
CR.
Create the
IngressController
object:oc create -f <ingress_controller_cr>.yaml
$ oc create -f <ingress_controller_cr>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Find the port for the service created for the
IngressController
CR:oc get svc -n openshift-ingress
$ oc get svc -n openshift-ingress
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output that shows port
80:32432/TCP
for therouter-nodeport-custom-ic3
serviceNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To create a new project, enter the following command:
oc new-project <project_name>
$ oc new-project <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To label the new namespace, enter the following command:
oc label namespace <project_name> <key>=<value>
$ oc label namespace <project_name> <key>=<value>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Where
<key>=<value>
must match the value in thenamespaceSelector
section of your Ingress Controller CR.
Create a new application in your cluster:
oc new-app --image=<image_name>
$ oc new-app --image=<image_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- An example of
<image_name>
isquay.io/openshifttest/hello-openshift:multiarch
.
Create a
Route
object for a service, so that the pod can use the service to expose the application external to the cluster.oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name>
$ oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteYou must specify the domain name of your custom Ingress Controller in the
--hostname
argument. If you do not do this, the Ingress Operator uses the default Ingress Controller to serve all the routes for your cluster.Check that the route has the
Admitted
status and that it includes metadata for the custom Ingress Controller:oc get route/hello-openshift -o json | jq '.status.ingress'
$ oc get route/hello-openshift -o json | jq '.status.ingress'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the default
IngressController
CR to prevent the default Ingress Controller from managing theNodePort
-typeService
. The default Ingress Controller will continue to monitor all other cluster traffic.oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>]}]}}}'
$ oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>]}]}}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the DNS entry can route inside and outside of your cluster by entering the following command. The command outputs the IP address of the node that received the label from running the
oc label node
command earlier in the procedure.dig +short <svc_name>-<project_name>.<custom_ic_domain_name>
$ dig +short <svc_name>-<project_name>.<custom_ic_domain_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that your cluster uses the IP addresses from external DNS servers for DNS resolution, check the connection of your cluster by entering the following command:
curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port>
$ curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow Output example
Hello OpenShift!
Hello OpenShift!
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5. Configuring ingress cluster traffic using a load balancer Copy linkLink copied to clipboard!
OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer.
2.5.1. Using a load balancer to get traffic into the cluster Copy linkLink copied to clipboard!
If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster.
A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing.
If a pool is configured, it is done at the infrastructure level, not by a cluster administrator.
The procedures in this section require prerequisites performed by the cluster administrator.
2.5.2. Prerequisites Copy linkLink copied to clipboard!
Before starting the following procedures, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster.
Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command:
oc adm policy add-cluster-role-to-user cluster-admin username
$ oc adm policy add-cluster-role-to-user cluster-admin username
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
2.5.3. Creating a project and service Copy linkLink copied to clipboard!
If the project and service that you want to expose do not exist, create the project and then create the service.
If the project and service already exist, skip to the procedure on exposing the service to create a route.
Prerequisites
-
Install the OpenShift CLI (
oc
) and log in as a cluster administrator.
Procedure
Create a new project for your service by running the
oc new-project
command:oc new-project <project_name>
$ oc new-project <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
oc new-app
command to create your service:oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the service was created, run the following command:
oc get svc -n <project_name>
$ oc get svc -n <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteBy default, the new service does not have an external IP address.
2.5.4. Exposing the service by creating a route Copy linkLink copied to clipboard!
You can expose the service as a route by using the oc expose
command.
Prerequisites
- You logged into OpenShift Container Platform.
Procedure
Log in to the project where the service you want to expose is located:
oc project <project_name>
$ oc project <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the
oc expose service
command to expose the route:oc expose service nodejs-ex
$ oc expose service nodejs-ex
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
route.route.openshift.io/nodejs-ex exposed
route.route.openshift.io/nodejs-ex exposed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the service is exposed, you can use a tool, such as
curl
to check that the service is accessible from outside the cluster.To find the hostname of the route, enter the following command:
oc get route
$ oc get route
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To check that the host responds to a GET request, enter the following command:
Example
curl
commandcurl --head nodejs-ex-myproject.example.com
$ curl --head nodejs-ex-myproject.example.com
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
HTTP/1.1 200 OK ...
HTTP/1.1 200 OK ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5.5. Creating a load balancer service Copy linkLink copied to clipboard!
Use the following procedure to create a load balancer service.
Prerequisites
- Make sure that the project and service you want to expose exist.
- Your cloud provider supports load balancers.
Procedure
To create a load balancer service:
- Log in to OpenShift Container Platform.
Load the project where the service you want to expose is located.
oc project project1
$ oc project project1
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Open a text file on the control plane node and paste the following text, editing the file as needed:
Sample load balancer configuration file
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Enter a descriptive name for the load balancer service.
- 2
- Enter the same port that the service you want to expose is listening on.
- 3
- Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature.
- 4
- Enter
LoadBalancer
as the type. - 5
- Enter the name of the service.
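Putting the preceding callouts together, the file might look similar to the following sketch; the service name, port, source ranges, and selector are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: egress-2  # 1
spec:
  ports:
  - name: db
    port: 3306  # 2
  loadBalancerSourceRanges:  # 3
  - 10.0.0.0/8
  - 192.168.0.0/16
  type: LoadBalancer  # 4
  selector:
    name: mysql  # 5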
NoteTo restrict the traffic through the load balancer to specific IP addresses, it is recommended to use the Ingress Controller field
spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges
. Do not set theloadBalancerSourceRanges
field.- Save and exit the file.
Run the following command to create the service:
oc create -f <file-name>
$ oc create -f <file-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc create -f mysql-lb.yaml
$ oc create -f mysql-lb.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to view the new service:
oc get svc
$ oc get svc
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The service has an external IP address automatically assigned if there is a cloud provider enabled.
On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address:
curl <public-ip>:<port>
$ curl <public-ip>:<port>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
curl 172.29.121.74:3306
$ curl 172.29.121.74:3306
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the
Got packets out of order
message, you are connected to the service:If you have a MySQL client, log in with the standard CLI command:
mysql -h 172.30.131.89 -u admin -p
$ mysql -h 172.30.131.89 -u admin -p
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]>
Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.6. Configuring ingress cluster traffic on AWS Copy linkLink copied to clipboard!
OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses load balancers on AWS, specifically a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). Both types of load balancers can forward the client’s IP address to the node, but a CLB requires proxy protocol support, which OpenShift Container Platform automatically enables.
There are two ways to configure an Ingress Controller to use an NLB:
- By force replacing the Ingress Controller that is currently using a CLB. This deletes the IngressController object and an outage will occur while the new DNS records propagate and the NLB is being provisioned.
- By editing an existing Ingress Controller that uses a CLB to use an NLB. This changes the load balancer without having to delete and recreate the IngressController object.
Both methods can also be used to switch from an NLB to a CLB.
You can configure these load balancers on a new or existing AWS cluster.
2.6.1. Configuring Classic Load Balancer timeouts on AWS
OpenShift Container Platform provides a method for setting a custom timeout period for a specific route or Ingress Controller. Additionally, an AWS Classic Load Balancer (CLB) has its own timeout period with a default time of 60 seconds.
If the timeout period of the CLB is shorter than the route timeout or Ingress Controller timeout, the load balancer can prematurely terminate the connection. You can prevent this problem by increasing both the timeout period of the route and CLB.
2.6.1.1. Configuring route timeouts
You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.
If you configured a user-managed external load balancer in front of your OpenShift Container Platform cluster, ensure that the timeout value for the user-managed external load balancer is higher than the timeout value for the route. This configuration prevents network congestion issues over the network that your cluster uses.
Prerequisites
- You need a deployed Ingress Controller on a running cluster.
Procedure
Using the
oc annotate
command, add the timeout to the route:

$ oc annotate route <route_name> \
    --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1

- 1
- Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).
The following example sets a timeout of two seconds on a route named
myroute
:

$ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
2.6.1.2. Configuring Classic Load Balancer timeouts
You can configure the default timeouts for a Classic Load Balancer (CLB) to extend idle connections.
Prerequisites
- You must have a deployed Ingress Controller on a running cluster.
Procedure
Set an AWS connection idle timeout of five minutes for the default
ingresscontroller
by running the following command:
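The documented command did not survive extraction; the following is a sketch of an equivalent merge patch. The providerParameters values shown, such as type: Classic and scope: External, are assumptions; see the note about the scope field after this procedure.

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
    --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External","providerParameters":{"type":"AWS","aws":{"type":"Classic","classicLoadBalancer":{"connectionIdleTimeout":"5m"}}}}}}}'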
Optional: Restore the default value of the timeout by running the following command:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
    --patch='{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer":{"connectionIdleTimeout":null}}}}}}}'
You must specify the scope
field when you change the connection timeout value unless the current scope is already set. When you set the scope
field, you do not need to do so again if you restore the default timeout value.
2.6.2. Configuring ingress cluster traffic on AWS using a Network Load Balancer
OpenShift Container Platform provides methods for communicating from outside the cluster with services that run in the cluster. One such method uses a Network Load Balancer (NLB). You can configure an NLB on a new or existing AWS cluster.
2.6.2.1. Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer
You can switch the Ingress Controller that is using a Classic Load Balancer (CLB) to one that uses a Network Load Balancer (NLB) on AWS.
Switching between these load balancers will not delete the IngressController
object.
This procedure might cause the following issues:
- An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure.
- Leaked load balancer resources due to a change in the annotation of the service.
Procedure
Modify the existing Ingress Controller that you want to switch to using an NLB. This example assumes that your default Ingress Controller has an
External
scope and no other customizations. A sketch of such an ingresscontroller.yaml file is shown after the following note.

Note: If you do not specify a value for the
spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type
field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation.

Tip: If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead.
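The example file did not survive extraction; the following is a minimal sketch of an ingresscontroller.yaml file that switches the default Ingress Controller to an NLB. The domain value is a placeholder assumption for your cluster domain.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  domain: apps.<cluster_domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB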
Apply the changes to the Ingress Controller YAML file by running the command:
$ oc apply -f ingresscontroller.yaml

Expect several minutes of outages while the Ingress Controller updates.
2.6.2.2. Switching the Ingress Controller from using a Network Load Balancer to a Classic Load Balancer
You can switch the Ingress Controller that is using a Network Load Balancer (NLB) to one that uses a Classic Load Balancer (CLB) on AWS.
Switching between these load balancers will not delete the IngressController
object.
This procedure might cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure.
Procedure
Modify the existing Ingress Controller that you want to switch to using a CLB. This example assumes that your default Ingress Controller has an
External
scope and no other customizations. A sketch of such an ingresscontroller.yaml file is shown after the following note.

Note: If you do not specify a value for the
spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type
field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation.

Tip: If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead.
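The example file did not survive extraction; the following is a minimal sketch of an ingresscontroller.yaml file that switches the default Ingress Controller to a CLB. The domain value is a placeholder assumption for your cluster domain.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  domain: apps.<cluster_domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic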
Apply the changes to the Ingress Controller YAML file by running the command:
$ oc apply -f ingresscontroller.yaml

Expect several minutes of outages while the Ingress Controller updates.
2.6.2.3. Replacing Ingress Controller Classic Load Balancer with Network Load Balancer
You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS.
This procedure might cause the following issues:
- An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure.
- Leaked load balancer resources due to a change in the annotation of the service.
Procedure
Create a file with a new default Ingress Controller. The following example assumes that your default Ingress Controller has an
External
scope and no other customizations:

Example ingresscontroller.yml file

If your default Ingress Controller has other customizations, ensure that you modify the file accordingly.
Tip: If your Ingress Controller has no other customizations and you are only updating the load balancer type, consider following the procedure detailed in "Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer".
Force replace the Ingress Controller YAML file:
$ oc replace --force --wait -f ingresscontroller.yml

Wait until the Ingress Controller is replaced. Expect several minutes of outage.
2.6.2.4. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster
You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster.
Prerequisites
- You must have an installed AWS cluster.
- The PlatformStatus of the infrastructure resource must be AWS.

To verify that the PlatformStatus is AWS, run:

$ oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}'

Example output

AWS
Procedure
Create an Ingress Controller backed by an AWS NLB on an existing cluster.
Create the Ingress Controller manifest:
$ cat ingresscontroller-aws-nlb.yaml

Example output
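The output did not survive extraction; the following sketch shows the manifest that the callouts below describe. The placeholder values are assumptions.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: $my_ingress_controller 1
  namespace: openshift-ingress-operator
spec:
  domain: $my_unique_ingress_domain 2
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External 3
      providerParameters:
        type: AWS
        aws:
          type: NLB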
- 1
- Replace
$my_ingress_controller
with a unique name for the Ingress Controller. - 2
- Replace
$my_unique_ingress_domain
with a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain>.
- 3
- You can replace External with Internal to use an internal NLB.
Create the resource in the cluster:
$ oc create -f ingresscontroller-aws-nlb.yaml
Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure.
2.6.2.5. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.
Prerequisites
-
Create the
install-config.yaml
file and complete any modifications to it.
Procedure
Create an Ingress Controller backed by an AWS NLB on a new cluster.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> 1

- 1
- For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
Create a file that is named
cluster-ingress-default-ingresscontroller.yaml
in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1

- 1
- For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, several network configuration files are in the
manifests/
directory, as shown:

$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

Example output

cluster-ingress-default-ingresscontroller.yaml

Open the
cluster-ingress-default-ingresscontroller.yaml
file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
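The CR example did not survive extraction; the following is a minimal sketch of a default Ingress Controller that uses an NLB, which you can adapt to the Operator configuration you want.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB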
- Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.
- Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.
2.6.2.6. Choosing subnets while creating a LoadBalancerService Ingress Controller
You can manually specify load balancer subnets for Ingress Controllers in an existing cluster. By default, the load balancer subnets are automatically discovered by AWS, but specifying them in the Ingress Controller overrides this, allowing for manual control.
Prerequisites
- You must have an installed AWS cluster.
-
You must know the names or IDs of the subnets to which you intend to map your
IngressController
.
Procedure
Create a custom resource (CR) file.
Create a YAML file (e.g.,
sample-ingress.yaml
) with the base IngressController content, and then add your subnets to it, as shown in the sketch that follows:
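The YAML example did not survive extraction; the following is a sketch that assumes a Classic Load Balancer with subnets specified by ID. The placeholder values are assumptions, and the nesting of the subnets field follows the IngressController API as understood here.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <name> 1
  namespace: openshift-ingress-operator
spec:
  domain: <domain> 2
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic
          classicLoadBalancer: 3
            subnets: 4
              ids: 5
              - <subnet_id>
              - <subnet_id>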
- 1
- Replace
<name>
with a name for theIngressController
. - 2
- Replace
<domain>
with the DNS name serviced by theIngressController
. - 3
- You can also use the
networkLoadBalancer
field if using an NLB. - 4
- You can optionally specify a subnet by name using the
names
field instead of specifying the subnet by ID. - 5
- Specify subnet IDs (or names, if you are using the names field).

Important: You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers and private subnets for internal Ingress Controllers.
Apply the CR file.
Save the file and apply it using the OpenShift CLI (
oc
).

$ oc apply -f sample-ingress.yaml

Confirm the load balancer was provisioned successfully by checking the
IngressController
conditions.

$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
2.6.2.7. Updating the subnets on an existing Ingress Controller
You can update an IngressController
with manually specified load balancer subnets in OpenShift Container Platform to avoid any disruptions, to maintain the stability of your services, and to ensure that your network configuration aligns with your specific requirements. The following procedures show you how to select and apply new subnets, verify the configuration changes, and confirm successful load balancer provisioning.
This procedure may cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure.
Procedure
To update an IngressController
with manually specified load balancer subnets, you can follow these steps:
Modify the existing IngressController to update to the new subnets.
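The modified YAML did not survive extraction; the following partial sketch shows the shape of the update, with the same structure as the sketch in the previous section and the subnets list replaced by the new entries that the callouts below describe. The field values are assumptions.

spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic
          classicLoadBalancer:
            subnets:
              ids:
              - <updated_subnet_id>
              - <updated_subnet_id>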
- 1
- Replace
<name>
with a name for theIngressController
. - 2
- Replace
<domain>
with the DNS name serviced by theIngressController
. - 3
- Specify updated subnet IDs (or names, if you are using the names field).
- 4
- You can also use the
networkLoadBalancer
field if using an NLB. - 5
- You can optionally specify a subnet by name using the
names
field instead of specifying the subnet by ID. - 6
- Update subnet IDs (or names if you are using
names
).
Important: You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers and private subnets for internal Ingress Controllers.
Examine the
Progressing
condition on the IngressController for instructions on how to apply the subnet updates by running the following command:

$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions[?(@.type==\"Progressing\")]}" | yq -PC

Example output
lastTransitionTime: "2024-11-25T20:19:31Z"
message: 'One or more status conditions indicate progressing: LoadBalancerProgressing=True
  (OperandsProgressing: One or more managed resources are progressing: The IngressController
  subnets were changed from [...] to [...]. To effectuate this change, you must delete the
  service: `oc -n openshift-ingress delete svc/router-<name>`; the service load-balancer will
  then be deprovisioned and a new one created. This will most likely cause the new load-balancer
  to have a different host name and IP address and cause disruption. To return to the previous
  state, you can revert the change to the IngressController: [...]'
reason: IngressControllerProgressing
status: "True"
type: Progressing
- To apply the update, delete the service associated with the Ingress controller by running the following command:
$ oc -n openshift-ingress delete svc/router-<name>
Verification
To confirm that the load balancer was provisioned successfully, check the
IngressController
conditions by running the following command:

$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
2.6.2.8. Configuring AWS Elastic IP (EIP) addresses for a Network Load Balancer (NLB)
You can specify static IPs, otherwise known as elastic IPs, for your network load balancer (NLB) in the Ingress Controller. This is useful in situations where you want to configure appropriate firewall rules for your cluster network.
Prerequisites
- You must have an installed AWS cluster.
-
You must know the names or IDs of the subnets to which you intend to map your
IngressController
.
Procedure
Create a YAML file that contains the following content:
sample-ingress.yaml
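The file contents did not survive extraction; the following is a sketch for an NLB with EIPs. The networkLoadBalancer.subnets and eipAllocations field names reflect the IngressController API as understood here and, like the placeholder values, are assumptions.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <name> 1
  namespace: openshift-ingress-operator
spec:
  domain: <domain> 2
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External 3
      providerParameters:
        type: AWS
        aws:
          type: NLB
          networkLoadBalancer:
            subnets: 4
              ids:
              - <subnet_id>
              names:
              - <subnet_name>
            eipAllocations: 5
            - <eip_allocation_id>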
- 1
- Replace the
<name>
placeholder with a name for the Ingress Controller. - 2
- Replace the
<domain>
placeholder with the DNS name serviced by the Ingress Controller. - 3
- The scope must be set to the value
External
and be Internet-facing in order to allocate EIPs. - 4
- Specify the IDs and names for your subnets. The total number of IDs and names must be equal to your allocated EIPs.
- 5
- Specify the EIP addresses.
Important: You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers. You can associate one EIP address per subnet.
Save and apply the CR file by entering the following command:
$ oc apply -f sample-ingress.yaml
Verification
Confirm the load balancer was provisioned successfully by checking the
IngressController
conditions by running the following command:

$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
2.7. Configuring ingress cluster traffic for a service external IP
You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service so that the service is available to traffic outside your OpenShift Container Platform cluster. Hosting an external IP address in this way is only applicable for a cluster installed on bare-metal hardware.
You must ensure that you correctly configure the external network infrastructure to route traffic to the service.
2.7.1. Prerequisites
Your cluster is configured with ExternalIPs enabled. For more information, read Configuring ExternalIPs for services.
Note: Do not use the same ExternalIP for the egress IP.
2.7.2. Attaching an ExternalIP to a service
You can attach an ExternalIP resource to a service. If you configured your cluster to automatically attach the resource to a service, you might not need to manually attach an ExternalIP to the service.
The examples in the procedure use a scenario that manually attaches an ExternalIP resource to a service in a cluster with an IP failover configuration.
Procedure
Confirm compatible IP address ranges for the ExternalIP resource by entering the following command in your CLI:
$ oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}'

Note: If
autoAssignCIDRs
is set and you did not specify a value for spec.externalIPs in the ExternalIP resource, OpenShift Container Platform automatically assigns an ExternalIP to a new Service object.

Choose one of the following options to attach an ExternalIP resource to the service:
If you are creating a new service, specify a value in the
spec.externalIPs
field and an array of one or more valid IP addresses in the allowedCIDRs parameter.

Example of a service YAML configuration file that supports an ExternalIP resource
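The example file did not survive extraction; the following is a minimal sketch using the standard Kubernetes Service form of spec.externalIPs. The service name, port, and address are illustrative assumptions, and the address must fall within the allowed ranges configured for your cluster.

apiVersion: v1
kind: Service
metadata:
  name: mysql-55-rhel7
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    name: mysql-55-rhel7
  externalIPs:
  - 192.174.120.10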
If you are attaching an ExternalIP to an existing service, enter the following command. Replace
<name>
with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas.
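The generic form of the command did not survive extraction; the following sketch is inferred from the example that follows.

$ oc patch svc <name> -p '{"spec":{"externalIPs":["<ip_address>"]}}'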
For example:

$ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}'

Example output
"mysql-55-rhel7" patched
"mysql-55-rhel7" patched
To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first.
$ oc get svc

Example output
NAME             CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
mysql-55-rhel7   172.30.131.89   192.174.120.10   3306/TCP   13m
2.8. Configuring ingress cluster traffic by using a NodePort
OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort
.
2.8.1. Using a NodePort to get traffic into the cluster
Use a NodePort
-type Service
resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service
resource’s .spec.ports[*].nodePort
field.
Using a node port requires additional port resources.
A NodePort
exposes the service on a static port on the node’s IP address. NodePort
s are in the 30000
to 32767
range by default, which means a NodePort
is unlikely to match a service’s intended port. For example, port 8080
may be exposed as port 31020
on the node.
The administrator must ensure the external IP addresses are routed to the nodes.
NodePort
s and external IPs are independent and both can be used concurrently.
The procedures in this section require prerequisites performed by the cluster administrator.
2.8.2. Prerequisites
Before starting the following procedures, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster.
Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command:
$ oc adm policy add-cluster-role-to-user cluster-admin <user_name>

- Have an OpenShift Container Platform cluster with at least one control plane node and at least one compute node, and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.
2.8.3. Creating a project and service
If the project and service that you want to expose do not exist, create the project and then create the service.

If the project and service already exist, skip to the procedure on exposing the service to create a route.
Prerequisites
-
Install the OpenShift CLI (
oc
) and log in as a cluster administrator.
Procedure
Create a new project for your service by running the
oc new-project
command:

$ oc new-project <project_name>

Use the
oc new-app
command to create your service:

$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git

To verify that the service was created, run the following command:
$ oc get svc -n <project_name>

Example output
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
nodejs-ex   ClusterIP   172.30.197.157   <none>        8080/TCP   70s

Note: By default, the new service does not have an external IP address.
2.8.4. Exposing the service by creating a route
You can expose the service as a route by using the oc expose
command.
Prerequisites
- You logged into OpenShift Container Platform.
Procedure
Log in to the project where the service you want to expose is located:
$ oc project <project_name>

To expose a node port for the application, modify the Service resource definition by entering the following command:
$ oc edit svc <service_name>

Example output
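The edited service definition did not survive extraction; the following excerpt is a sketch showing the relevant change, setting type: NodePort. The port names and numbers are illustrative assumptions that match the verification output later in this section.

spec:
  ports:
  - name: 8443-tcp
    nodePort: 30327
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: httpd
  type: NodePort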
Optional: To confirm the service is available with a node port exposed, enter the following command:
$ oc get svc -n myproject

Example output
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
nodejs-ex           ClusterIP   172.30.217.127   <none>        3306/TCP         9m44s
nodejs-ex-ingress   NodePort    172.30.107.72    <none>        3306:31345/TCP   39s

Optional: To remove the service created automatically by the
oc new-app
command, enter the following command:

$ oc delete svc nodejs-ex
Verification
To check that the service node port is updated with a port in the
30000-32767
range, enter the following command:

$ oc get svc

In the following example output, the updated port is
30327
:

Example output

NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
httpd   NodePort   172.xx.xx.xx   <none>        8443:30327/TCP   109s
2.9. Configuring ingress cluster traffic using load balancer allowed source ranges
You can specify a list of IP address ranges for the IngressController
. This restricts access to the load balancer service when the endpointPublishingStrategy
is LoadBalancerService
.
2.9.1. Configuring load balancer allowed source ranges
You can enable and configure the spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges
field. By configuring load balancer allowed source ranges, you can limit the access to the load balancer for the Ingress Controller to a specified list of IP address ranges. The Ingress Operator reconciles the load balancer Service and sets the spec.loadBalancerSourceRanges
field based on AllowedSourceRanges
.
If you have already set the spec.loadBalancerSourceRanges
field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in a previous version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges so that it overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False
again.
Prerequisites
- You have a deployed Ingress Controller on a running cluster.
Procedure
Set the allowed source ranges API for the Ingress Controller by running the following command:
$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
    --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External","allowedSourceRanges":["0.0.0.0/0"]}}}}' 1

- 1
- The example value
0.0.0.0/0
specifies the allowed source range.
2.9.2. Migrating to load balancer allowed source ranges
If you have already set the annotation service.beta.kubernetes.io/load-balancer-source-ranges
, you can migrate to load balancer allowed source ranges. When you set the AllowedSourceRanges
, the Ingress Controller sets the spec.loadBalancerSourceRanges
field based on the AllowedSourceRanges
value and unsets the service.beta.kubernetes.io/load-balancer-source-ranges
annotation.
If you have already set the spec.loadBalancerSourceRanges
field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges
in a previous version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True
after an upgrade. To fix this, set AllowedSourceRanges
that overwrites the spec.loadBalancerSourceRanges
field and clears the service.beta.kubernetes.io/load-balancer-source-ranges
annotation. The Ingress Controller starts reporting Progressing=False
again.
Prerequisites
-
You have set the
service.beta.kubernetes.io/load-balancer-source-ranges
annotation.
Procedure
Ensure that the
service.beta.kubernetes.io/load-balancer-source-ranges
annotation is set:

$ oc get svc router-default -n openshift-ingress -o yaml

Example output
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32

Check whether the
spec.loadBalancerSourceRanges
field is set:

$ oc get svc router-default -n openshift-ingress -o yaml

Example output
...
spec:
  loadBalancerSourceRanges:
  - 0.0.0.0/0
...

- Update your cluster to OpenShift Container Platform 4.19.
Set the allowed source ranges API for the
ingresscontroller
by running the following command:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
    --patch='{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"allowedSourceRanges":["0.0.0.0/0"]}}}}' 1

- 1
- The example value
0.0.0.0/0
specifies the allowed source range.
2.10. Patching existing ingress objects
You can update or modify the following fields of existing Ingress
objects without recreating the objects or disrupting services to them:
- Specifications
- Host
- Path
- Backend services
- SSL/TLS settings
- Annotations
2.10.1. Patching Ingress objects to resolve an ingressWithoutClassName alert
The ingressClassName
field specifies the name of the IngressClass
object. You must define the ingressClassName
field for each Ingress
object.
If you have not defined the ingressClassName
field for an Ingress
object, you could experience routing issues. After 24 hours, you will receive an ingressWithoutClassName
alert to remind you to set the ingressClassName
field.
Procedure
Patch the Ingress
objects with a completed ingressClassName
field to ensure proper routing and functionality.
List all
IngressClass
objects:

$ oc get ingressclass

List all Ingress objects in all namespaces:

$ oc get ingress -A

Patch the Ingress object:

$ oc patch ingress/<ingress_name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}'

Replace
<ingress_name>
with the name of theIngress
object. This command patches theIngress
object to include the desired ingress class name.
2.11. Allocating Load Balancers to Specific Subnets
You can manage application traffic efficiently by allocating load balancers. Network administrators can allocate load balancers to customize deployments which can ensure optimal traffic distribution, high availability of applications, uninterrupted service, and network segmentation.
2.11.1. Allocating API and Ingress Load Balancers to Specific Subnets on AWS
You can control the network placement of OpenShift Load Balancers on AWS, including those for the Ingress Controller, by explicitly defining your virtual private cloud’s (VPC’s) subnets and assigning them specific roles directly within the platform.aws.vpc.subnets
section of the install-config.yaml
file. This method provides granular control over which subnets are used for resources, such as the Ingress Controller and other cluster components.
2.11.1.1. Specifying AWS subnets for OpenShift API and ingress load balancers at installation
Perform the following steps to allocate API and ingress load balancers to specific subnets.
Prerequisites
Before you begin, ensure you have:
- An existing AWS virtual private cloud (VPC).
Pre-configured AWS subnets intended for use by the OpenShift cluster, with the following considerations:
-
You have a list of their subnet IDs (for example,
subnet-0123456789abcdef0
). These IDs will be used in theinstall-config.yaml
file. - Use subnets spanning at least two availability zones (AZs) for high availability of load balancers and other critical components, like control planes.
- You have sufficient available IP addresses within these subnets for all assigned roles.
- The AWS configuration for these subnets, including network ACLs and security groups, must permit necessary traffic for all roles assigned to them. For subnets hosting an ingress controller, this typically includes TCP ports 80 and 443 from required sources.
-
You have a list of their subnet IDs (for example,
- You have the OpenShift installer binary for your target OpenShift version.
-
You have an
install-config.yaml
file.
Procedure
Prepare the
install-config.yaml
file:

If you have not already done so, generate the installation configuration file by using the OpenShift installer:

$ openshift-install create install-config --dir=<your_installation_directory>

This command creates the
install-config.yaml
file in the specified directory.

Define subnets and assign roles:
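The install-config.yaml example did not survive extraction; the following is a minimal sketch of the relevant platform.aws.vpc.subnets stanza, using placeholder values that are assumptions. The numbered callouts later in this step describe these fields.

apiVersion: v1
baseDomain: <base_domain> 1
metadata:
  name: <cluster_name>
platform:
  aws:
    region: <aws_region> 2
    vpc: 3
      subnets: 4
      - id: <public_subnet_id> 5
        roles:
        - type: IngressControllerLB 6
      - id: <private_subnet_id>
        roles:
        - type: ClusterNode 7
pullSecret: '<pull_secret>' 8
sshKey: '<ssh_key>' 9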
Open the
install-config.yaml
file located in<your_installation_directory>
using a text editor. You will define your VPC subnets and their designated roles under theplatform.aws.vpc.subnets
field.For each AWS subnet you intend the cluster to use, you will create an entry specifying its
id
and a list ofroles
. Each role is an object with atype
key. To designate a subnet for the default Ingress Controller, assign it a role withtype: IngressControllerLB
.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Your base domain.
- 2
- Your AWS region.
- 3
- The vpc object under
platform.aws
contains the subnets list. - 4
- List of all subnet objects that OpenShift will use. Each object defines a subnet id and its roles.
- 5
- Replace with your AWS Subnet ID.
- 6
- The
type: IngressControllerLB
role specifically designates this subnet for the default Ingress Controller’s LoadBalancer. In private/internal cluster, the subnet withIngressControllerLB
role must be private. - 7
- The
type: ClusterNode
role designates this subnet for control plane and compute nodes. These are typically private subnets. - 8
- Your pull secret.
- 9
- Your SSH key.
Entries for control plane load balancers in the
subnets
list would follow a similar pattern:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For the default public Ingress Controller, any subnet assigned the
IngressControllerLB
role in yourinstall-config.yaml
file must be a public subnet. For example, it must have a route table entry in AWS that directs outbound traffic to an internet gateway (IGW).Ensure you list all necessary subnets, public and private across the AZs, and assign them appropriate roles according to your cluster architecture.
Subnet IDs define the subnets in an existing VPC and can optionally specify their intended roles. If no roles are specified on any subnet, the subnet roles are decided automatically. In this case, the VPC must not contain any other non-cluster subnets without the
kubernetes.io/cluster/<cluster-id>
tag.If roles are specified for subnets, each subnet must have at least one assigned role, and the
ClusterNode
,BootstrapNode
,IngressControllerLB
,ControlPlaneExternalLB
, andControlPlaneInternalLB
roles must be assigned to at least one subnet. However, if the cluster scope is internal,ControlPlaneExternalLB
is not required.Proceed with the cluster Installation:
After saving your changes to the
install-config.yaml
file, create the cluster:openshift-install create cluster --dir=<your_installation_directory>
$ openshift-install create cluster --dir=<your_installation_directory>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The installation program will now use the subnet definitions and explicit role assignments from the
platform.aws.vpc.subnets
section of yourinstall-config.yaml
file to provision cluster resources, including placing the Ingress Controller’s LoadBalancer in the subnets you designated with theIngressControllerLB
role.
The role assignment mechanism within platform.aws.vpc.subnets
, such as specifying types like IngressControllerLB
, ClusterNode
, ControlPlaneExternalLB
, ControlPlaneInternalLB
, BootstrapNode
is the comprehensive way the OpenShift installer identifies suitable subnets for various cluster services and components.
2.12. Configuring an Ingress Controller for manual DNS Management
As a cluster administrator, when you create an Ingress Controller, the Operator manages the DNS records automatically. This has some limitations when the required DNS zone is different from the cluster DNS zone or when the DNS zone is hosted outside the cloud provider.
As a cluster administrator, you can configure an Ingress Controller to stop automatic DNS management and start manual DNS management. Set dnsManagementPolicy
to specify when it should be automatically or manually managed.
When you change an Ingress Controller from Managed
to Unmanaged
DNS management policy, the Operator does not clean up the previous wildcard DNS record provisioned on the cloud. When you change an Ingress Controller from Unmanaged
to Managed
DNS management policy, the Operator attempts to create the DNS record on the cloud provider if it does not exist or updates the DNS record if it already exists.
When you set dnsManagementPolicy
to unmanaged
, you have to manually manage the lifecycle of the wildcard DNS record on the cloud provider.
2.12.1. Managed DNS management policy
The Managed
DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is automatically managed by the Operator.
2.12.2. Unmanaged DNS management policy
The Unmanaged
DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is not automatically managed, instead it becomes the responsibility of the cluster administrator.
On the AWS cloud platform, if the domain on the Ingress Controller does not match with dnsConfig.Spec.BaseDomain
then the DNS management policy is automatically set to Unmanaged
.
2.12.3. Creating a custom Ingress Controller with the Unmanaged DNS management policy
As a cluster administrator, you can create a new custom Ingress Controller with the Unmanaged
DNS management policy.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create a custom resource (CR) file named
sample-ingress.yaml
containing the following:
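The CR content did not survive extraction; the following is a minimal sketch. The placeholder values are assumptions, and the callouts below describe each field.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <name> 1
  namespace: openshift-ingress-operator
spec:
  domain: <domain> 2
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External 3
      dnsManagementPolicy: Unmanaged 4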
- Specify the
<name>
with a name for theIngressController
object. - 2
- Specify the
domain
based on the DNS record that was created as a prerequisite. - 3
- Specify the
scope
asExternal
to expose the load balancer externally. - 4
dnsManagementPolicy
indicates if the Ingress Controller is managing the lifecycle of the wildcard DNS record associated with the load balancer. The valid values areManaged
andUnmanaged
. The default value isManaged
.
Save the file to apply the changes.
$ oc apply -f sample-ingress.yaml
2.12.4. Modifying an existing Ingress Controller
As a cluster administrator, you can modify an existing Ingress Controller to manually manage the DNS record lifecycle.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Modify the chosen
IngressController
to setdnsManagementPolicy
:

SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")

oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge \
    --patch="{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\",\"scope\":\"${SCOPE}\"}}}}"

- Optional: You can delete the associated DNS record in the cloud provider.
2.13. Gateway API with OpenShift Container Platform Networking
OpenShift Container Platform provides additional ways of configuring network traffic by using Gateway API with the Ingress Operator.
Gateway API does not support user-defined networks (UDN).
2.13.1. Overview of Gateway API
Gateway API is an open source, community-managed, Kubernetes networking mechanism. It focuses on routing within the transport layer, L4, and the application layer, L7, for clusters. A variety of vendors offer many implementations of Gateway API.
The project is an effort to provide a standardized ecosystem by using a portable API with broad community support. Integrating Gateway API functionality into the Ingress Operator enables a networking solution that aligns with existing community and upstream development efforts.
Gateway API extends the functionality of the Ingress Operator to handle more granular cluster traffic and routing configurations. With these capabilities, you can create instances of Gateway API custom resource definitions (CRDs). For OpenShift Container Platform clusters, the Ingress Operator creates the following resources:
- Gateway
- This resource describes how traffic can be translated to services within the cluster. For example, a specific load balancer configuration.
- GatewayClass
-
This resource defines a set of
Gateway
objects that share a common configuration and behavior. For example, two separateGatewayClass
objects might be created to distinguish a set ofGateway
resources used for public or private applications. - HTTPRoute
- This resource specifies the routing behavior of HTTP requests from a Gateway to a service, and is especially useful for multiplexing HTTP or terminated HTTPS connections.
- GRPCRoute
- This resource specifies the routing behavior of gRPC requests.
- ReferenceGrant
- This resource enables cross-namespace references. For example, it enables routes to forward traffic to backends that are in a different namespace.
In OpenShift Container Platform, the implementation of Gateway API is based on gateway.networking.k8s.io/v1
, and all fields in this version are supported.
2.13.1.1. Benefits of Gateway API
Gateway API provides the following benefits:
-
Portability: While OpenShift Container Platform uses HAProxy to improve Ingress performance, Gateway API does not rely on vendor-specific annotations to provide certain behavior. To get comparable performance as HAProxy, the
Gateway
objects need to be horizontally scaled or their associated nodes need to be vertically scaled. -
Separation of concerns: Gateway API uses a role-based approach to its resources, and more neatly fits into how a large organization structures its responsibilities and teams. Platform engineers might focus on
GatewayClass
resources, cluster admins might focus on configuringGateway
resources, and application developers might focus on routing their services withHTTPRoute
resources. - Extensibility: Additional functionality is developed as a standardized CRD.
2.13.1.2. Limitations of Gateway API
Gateway API has the following limitations:
- Version incompatibilities: Gateway API ecosystem changes rapidly, and some implementations do not work with others because their featureset is based on differing versions of Gateway API.
- Resource overhead: While more flexible, Gateway API uses multiple resource types to achieve an outcome. For smaller applications, the simplicity of traditional Ingress might be a better fit.
2.13.2. Gateway API implementation for OpenShift Container Platform
The Ingress Operator manages the lifecycle of Gateway API CRDs in a way that enables other vendor implementations to make use of CRDs defined in an OpenShift Container Platform cluster.
In some situations, Gateway API provides one or more fields that a vendor implementation does not support, but that implementation is otherwise compatible in schema with the rest of the fields. These "dead fields" can result in disrupted Ingress workloads, improperly provisioned applications and services, and security related issues. Because OpenShift Container Platform uses a specific version of Gateway API CRDs, any use of third-party implementations of Gateway API must conform to the OpenShift Container Platform implementation to ensure that all fields work as expected.
Any CRDs created within an OpenShift Container Platform 4.19 cluster are compatibly versioned and maintained by the Ingress Operator. If CRDs are already present but were not previously managed by the Ingress Operator, the Ingress Operator checks whether these configurations are compatible with Gateway API version supported by OpenShift Container Platform, and creates an admin-gate that requires your acknowledgment of CRD succession.
If you are updating your cluster from a previous OpenShift Container Platform version that contains Gateway API CRDs, change those resources so that they exactly match the version supported by OpenShift Container Platform. Otherwise, you cannot update your cluster, because those CRDs were not managed by OpenShift Container Platform and could contain functionality that is unsupported by Red Hat.
2.13.3. Getting started with Gateway API for the Ingress Operator
When you create a GatewayClass as shown in the first step, it configures Gateway API for use on your cluster.
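The GatewayClass CR in the first step did not survive extraction; the following is a minimal sketch of the openshift-default.yaml file that the procedure creates. The controllerName value shown is the one the Ingress Operator requires, as the Important note in the procedure explains.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: openshift-default
spec:
  controllerName: openshift.io/gateway-controller/v1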
Procedure
Create a
GatewayClass
object:Create a YAML file,
openshift-default.yaml
, that contains the following information:Example
GatewayClass
CRCopy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantThe controller name must be exactly as shown for the Ingress Operator to manage it. If you set this field to anything else, the Ingress Operator ignores the
GatewayClass
object and all associatedGateway
,GRPCRoute
, andHTTPRoute
objects. The controller name is tied to the implementation of Gateway API in OpenShift Container Platform, andopenshift.io/gateway-controller/v1
is the only controller name allowed.Run the following command to create the
GatewayClass
resource:oc create -f openshift-default.yaml
$ oc create -f openshift-default.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
gatewayclass.gateway.networking.k8s.io/openshift-default created
gatewayclass.gateway.networking.k8s.io/openshift-default created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow During the creation of the
GatewayClass
resource, the Ingress Operator installs a lightweight version of Red Hat OpenShift Service Mesh, an Istio custom resource, and a new deployment in theopenshift-ingress
namespace.Optional: Verify that the new deployment,
istiod-openshift-gateway
is ready and available:oc get deployment -n openshift-ingress
$ oc get deployment -n openshift-ingress
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY UP-TO-DATE AVAILABLE AGE istiod-openshift-gateway 1/1 1 1 55s router-default 2/2 2 2 6h4m
NAME READY UP-TO-DATE AVAILABLE AGE istiod-openshift-gateway 1/1 1 1 55s router-default 2/2 2 2 6h4m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a secret by running the following command:
oc -n openshift-ingress create secret tls gwapi-wildcard --cert=wildcard.crt --key=wildcard.key
$ oc -n openshift-ingress create secret tls gwapi-wildcard --cert=wildcard.crt --key=wildcard.key
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get the domain of the Ingress Operator by running the following command:
DOMAIN=$(oc get ingresses.config/cluster -o jsonpath={.spec.domain})
$ DOMAIN=$(oc get ingresses.config/cluster -o jsonpath={.spec.domain})
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
Gateway
object:Create a YAML file,
example-gateway.yaml
, that contains the following information:Example
Gateway
CRCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The
Gateway
object must be created in theopenshift-ingress
namespace. - 2
- The
Gateway
object must reference the name of the previously createdGatewayClass
object. - 3
- The HTTPS listener listens for HTTPS requests that match a subdomain of the cluster domain. You use this listener to configure ingress to your applications by using Gateway API
HTTPRoute
resources. - 4
- The hostname must be a subdomain of the Ingress Operator domain. If you use a domain, the listener tries to serve all traffic in that domain.
- 5
- The name of the previously created secret.
Apply the resource by running the following command:
oc apply -f example-gateway.yaml
$ oc apply -f example-gateway.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: When you create a
Gateway
object, Red Hat OpenShift Service Mesh automatically provisions a deployment and service with the same name. Verify this by running the following commands:To verify the deployment, run the following command:
oc get deployment -n openshift-ingress example-gateway-openshift-default
$ oc get deployment -n openshift-ingress example-gateway-openshift-default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY UP-TO-DATE AVAILABLE AGE example-gateway-openshift-default 1/1 1 1 25s
NAME READY UP-TO-DATE AVAILABLE AGE example-gateway-openshift-default 1/1 1 1 25s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify the service, run the following command:
oc get service -n openshift-ingress example-gateway-openshift-default
$ oc get service -n openshift-ingress example-gateway-openshift-default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-gateway-openshift-default LoadBalancer 10.1.2.3 <external_ipname> <port_info> 47s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-gateway-openshift-default LoadBalancer 10.1.2.3 <external_ipname> <port_info> 47s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Optional: The Ingress Operator automatically creates a
DNSRecord
CR using the hostname from the listeners, and adds the labelgateway.networking.k8s.io/gateway-name=example-gateway
. Verify the status of the DNS record by running the following command:oc -n openshift-ingress get dnsrecord -l gateway.networking.k8s.io/gateway-name=example-gateway -o yaml
$ oc -n openshift-ingress get dnsrecord -l gateway.networking.k8s.io/gateway-name=example-gateway -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create an
HTTPRoute
resource that directs requests to your already-created namespace and application called example-app/example-app
:Create a YAML file,
example-route.yaml
, that contains the following information:Example
HTTPRoute
CR
- 1
- The namespace in which you are deploying your application.
- 2
- This field must point to the
Gateway
object you previously configured. - 3
- The hostname must match the one specified in the
Gateway
object. In this case, the listeners use a wildcard hostname. - 4
- This field specifies the backend references that point to your service.
- 5
- The name of the
Service
for your application.
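A minimal sketch of such an HTTPRoute resource, assuming the example-app service listens on port 8080; the hostname and port are illustrative:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
  namespace: example-app
spec:
  parentRefs:
  - name: example-gateway
    namespace: openshift-ingress
  hostnames:
  - "example.gwapi.<cluster_ingress_domain>"
  rules:
  - backendRefs:
    - name: example-app
      port: 8080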
Apply the resource by running the following command:
oc apply -f example-route.yaml
$ oc apply -f example-route.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
httproute.gateway.networking.k8s.io/example-route created
httproute.gateway.networking.k8s.io/example-route created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the
Gateway
object is deployed and has the conditionProgrammed
by running the following command:oc wait -n openshift-ingress --for=condition=Programmed gateways.gateway.networking.k8s.io example-gateway
$ oc wait -n openshift-ingress --for=condition=Programmed gateways.gateway.networking.k8s.io example-gateway
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
gateway.gateway.networking.k8s.io/example-gateway condition met
gateway.gateway.networking.k8s.io/example-gateway condition met
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Send a request to the configured
HTTPRoute
object hostname:curl -I --cacert <local cert file> https://example.gwapi.${DOMAIN}:443
$ curl -I --cacert <local cert file> https://example.gwapi.${DOMAIN}:443
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.13.4. Gateway API deployment topologies Copy linkLink copied to clipboard!
Gateway API is designed to accommodate two topologies: shared gateways and dedicated gateways. Each topology has its own advantages and different security implications.
- Dedicated gateway
-
Routes and any load balancers or proxies are served from the same namespace. The
Gateway
object restricts routes to a particular application namespace. This is the default topology when deploying a Gateway API resource in OpenShift Container Platform. - Shared gateway
-
Routes are served from multiple namespaces or with multiple hostnames. The
Gateway
object controls which routes are allowed from application namespaces by using the spec.listeners.allowedRoutes.namespaces
field.
2.13.4.1. Dedicated gateway example Copy linkLink copied to clipboard!
The following example shows a dedicated Gateway
resource, fin-gateway
:
Example dedicated Gateway
resource
- 1
- Creating a
Gateway
resource without setting spec.listeners[].allowedRoutes
results in implicitly setting the namespaces.from
field to have the value Same
.
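A minimal sketch of such a dedicated Gateway resource; the namespace, gateway class name, and listener values are illustrative assumptions:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: fin-gateway
  namespace: fin-app
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http
    protocol: HTTP
    port: 8080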
The following example shows the associated HTTPRoute
resource, sales-db
, which attaches to the dedicated Gateway
object:
Example HTTPRoute
resource
The HTTPRoute
resource must have the name of the Gateway
object as the value for its parentRefs
field in order to attach to the gateway. Implicitly, the route is assumed to be in the same namespace as the Gateway
object.
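A minimal sketch of the sales-db route attached to fin-gateway; the backend service name and port are illustrative assumptions:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: sales-db
  namespace: fin-app
spec:
  parentRefs:
  - name: fin-gateway
  rules:
  - backendRefs:
    - name: sales-db
      port: 8080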
Chapter 3. Load balancing on RHOSP Copy linkLink copied to clipboard!
3.1. Limitations of load balancer services Copy linkLink copied to clipboard!
OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) use Octavia to handle load balancer services. As a result of this choice, such clusters have a number of functional limitations.
RHOSP Octavia has two supported providers: Amphora and OVN. These providers differ in terms of available features as well as implementation details. These distinctions affect load balancer services that are created on your cluster.
3.1.1. Local external traffic policies Copy linkLink copied to clipboard!
You can set the external traffic policy (ETP) parameter, .spec.externalTrafficPolicy
, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider.
Having the ETP
option set to Local
requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the create-monitor
option in the cloud provider configuration to true
.
In RHOSP 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to Local is unsupported.
In RHOSP 16.2, the Amphora Octavia provider does not support HTTP monitors on UDP pools. As a result, UDP load balancer services have UDP-CONNECT
monitors created instead. Due to implementation details, this configuration only functions properly with the OVN-Kubernetes CNI plugin.
3.2. Scaling clusters for application traffic by using Octavia Copy linkLink copied to clipboard!
OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.
You must create your own Octavia load balancer to use it for application network scaling.
3.2.1. Scaling clusters by using Octavia Copy linkLink copied to clipboard!
If you want to use multiple API load balancers, create an Octavia load balancer and then configure your cluster to use it.
Prerequisites
- Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
Procedure
From a command line, create an Octavia load balancer that uses the Amphora driver:
openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>
$ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can use a name of your choice instead of
API_OCP_CLUSTER
.After the load balancer becomes active, create listeners:
$ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteTo view the status of the load balancer, enter
openstack loadbalancer list
.Create a pool that uses the round robin algorithm and has session persistence enabled:
openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To ensure that control plane machines are available, create a health monitor:
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add the control plane machines as members of the load balancer pool:
$ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP; do openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443; done
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: To reuse the cluster API floating IP address, unset it:
openstack floating ip unset $API_FIP
$ openstack floating ip unset $API_FIP
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add either the unset
API_FIP
or a new address to the created load balancer VIP:
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Your cluster now uses Octavia for load balancing.
3.3. Services for a user-managed load balancer Copy linkLink copied to clipboard!
You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use a user-managed load balancer in place of the default load balancer.
Configuring a user-managed load balancer depends on your vendor’s load balancer.
The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor’s load balancer.
Red Hat supports the following services for a user-managed load balancer:
- Ingress Controller
- OpenShift API
- OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:
Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment
Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment
Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment
The following configuration options are supported for user-managed load balancers:
- Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.
Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a
/27
or/28
, you can simplify your load balancer targets.TipYou can list all IP addresses that exist in a network by checking the machine config pool’s resources.
Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information:
- For a front-end IP address, you can use the same IP address for both the Ingress Controller’s load balancer and the API load balancer. Check the vendor’s documentation for this capability.
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions:
- Assign a static IP address to each control plane node.
- Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
- Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.
3.3.1. Configuring a user-managed load balancer Copy linkLink copied to clipboard!
You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use a user-managed load balancer in place of the default load balancer.
Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.
Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
OpenShift API prerequisites
- You defined a front-end IP address.
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
- Port 6443 provides access to the OpenShift API service.
- Port 22623 can provide ignition startup configurations to nodes.
- The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
- The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.
Ingress Controller prerequisites
- You defined a front-end IP address.
- TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
- The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
- The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
- The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
Prerequisite for health check URL specifications
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples show health check specifications for the previously listed backend services:
Example of a Kubernetes API health check specification
Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification
Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification
Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
Procedure
Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
Example HAProxy configuration with one listed subnet
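A minimal sketch of such an HAProxy configuration; the front-end address, node IP addresses, and section names are illustrative assumptions. The multiple-subnet variant simply adds server lines for the nodes on each additional subnet:
listen api-server-6443
    bind <load_balancer_front_end_ip>:6443
    mode tcp
    server master-0 192.168.10.10:6443 check inter 10s rise 2 fall 2
    server master-1 192.168.10.11:6443 check inter 10s rise 2 fall 2
    server master-2 192.168.10.12:6443 check inter 10s rise 2 fall 2
listen machine-config-server-22623
    bind <load_balancer_front_end_ip>:22623
    mode tcp
    server master-0 192.168.10.10:22623 check
    server master-1 192.168.10.11:22623 check
    server master-2 192.168.10.12:22623 check
listen ingress-router-80
    bind <load_balancer_front_end_ip>:80
    mode tcp
    balance source
    server worker-0 192.168.10.20:80 check
    server worker-1 192.168.10.21:80 check
listen ingress-router-443
    bind <load_balancer_front_end_ip>:443
    mode tcp
    balance source
    server worker-0 192.168.10.20:443 check
    server worker-1 192.168.10.21:443 check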
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example HAProxy configuration with multiple listed subnets
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
curl
CLI command to verify that the user-managed load balancer and its resources are operational:
Verify that the Kubernetes API server resource is accessible by running the following command and observing the response:
curl https://<loadbalancer_ip_address>:6443/version --insecure
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, you receive a JSON object in response:
Verify that the machine config server resource is accessible by running the following command and observing the output:
curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK Content-Length: 0
HTTP/1.1 200 OK Content-Length: 0
Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output:
curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache
HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache
Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output:
curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update the records on your DNS server so that the cluster API and applications resolve through the load balancer.
Examples of modified DNS records
<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End
<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End
Copy to Clipboard Copied! Toggle word wrap Toggle overflow <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End
Important: DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster’s
install-config.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Set
UserManaged
for thetype
parameter to specify a user-managed load balancer for your cluster. The parameter defaults toOpenShiftManagedDefault
, which denotes the default internal load balancer. For services defined in anopenshift-kni-infra
namespace, a user-managed load balancer can deploy thecoredns
service to pods in your cluster but ignoreskeepalived
andhaproxy
services. - 2
- Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the Kubernetes API can communicate with the user-managed load balancer.
- 3
- Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster.
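A minimal sketch of the relevant install-config.yaml excerpt; the comments map the fields to the descriptions above, and the values are illustrative assumptions:
# install-config.yaml (excerpt)
platform:
  openstack:
    apiVIPs:
    - <api_ip_address>        # public IP address for the Kubernetes API
    ingressVIPs:
    - <ingress_ip_address>    # public IP address for ingress traffic
    loadBalancer:
      type: UserManaged       # defaults to OpenShiftManagedDefault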
Verification
Use the
curl
CLI command to verify that the user-managed load balancer and DNS record configuration are operational:Verify that you can access the cluster API, by running the following command and observing the output:
curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, you receive a JSON object in response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that you can access the cluster machine configuration, by running the following command and observing the output:
curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK Content-Length: 0
HTTP/1.1 200 OK Content-Length: 0
Verify that you can access each cluster application on port 80 by running the following command and observing the output:
curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that you can access each cluster application on port 443, by running the following command and observing the output:
curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the configuration is correct, the output from the command shows the following response:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
3.4. Specifying a floating IP address in the Ingress Controller Copy linkLink copied to clipboard!
By default, a floating IP address gets randomly assigned to your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) upon deployment. This floating IP address is associated with your Ingress port.
You might want to pre-create a floating IP address before updating your DNS records and deploying your cluster. In this situation, you can define the floating IP address in the Ingress Controller. You can do this regardless of whether you are using Octavia or a user-managed load balancer.
Procedure
Create the Ingress Controller custom resource (CR) file with the floating IPs:
Example Ingress config
sample-ingress.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The name of your Ingress Controller. If you are using the default Ingress Controller, the value for this field is
default
. - 2
- The DNS name serviced by the Ingress Controller.
- 3
- You must set the scope to
External
to use a floating IP address. - 4
- The floating IP address associated with the port your Ingress Controller is listening on.
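A minimal sketch of such an Ingress Controller CR; the provider parameter layout follows the OpenStack fields described above, and the domain and IP values are illustrative assumptions:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  domain: apps.<name>.<domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: OpenStack
        openstack:
          floatingIP: <ingress_port_IP>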
Apply the CR file by running the following command:
oc apply -f sample-ingress.yaml
$ oc apply -f sample-ingress.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update your DNS records with the Ingress Controller endpoint:
*.apps.<name>.<domain>. IN A <ingress_port_IP>
*.apps.<name>.<domain>. IN A <ingress_port_IP>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Continue with creating your OpenShift Container Platform cluster.
Verification
Confirm that the load balancer was successfully provisioned by checking the
IngressController
conditions using the following command:oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 4. Load balancing with MetalLB Copy linkLink copied to clipboard!
4.1. Configuring MetalLB address pools Copy linkLink copied to clipboard!
As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples in this section assume the namespace is metallb-system.
For more information about how to install the MetalLB Operator, see About MetalLB and the MetalLB Operator.
4.1.1. About the IPAddressPool custom resource Copy linkLink copied to clipboard!
The fields for the IPAddressPool
custom resource are described in the following tables.
Field | Type | Description |
---|---|---|
|
|
Specifies the name for the address pool. When you add a service, you can specify this pool name in the |
|
| Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. |
|
|
Optional: Specifies the key value pair assigned to the |
|
| Specifies a list of IP addresses for MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. |
|
|
Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify Note
For IP address pool configurations, ensure the addresses field specifies only IPs that are available and not in use by other network devices, especially gateway addresses, to prevent conflicts when |
|
|
Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not assigned from the pool. |
You can assign IP addresses from an IPAddressPool
to services and namespaces by configuring the spec.serviceAllocation
specification.
Field | Type | Description |
---|---|---|
|
| Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. |
|
| Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. |
|
| Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. |
|
| Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. |
4.1.2. Configuring an address pool Copy linkLink copied to clipboard!
As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create a file, such as
ipaddresspool.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This label assigned to the
IPAddressPool
can be referenced by theipAddressPoolSelectors
in theBGPAdvertisement
CRD to associate theIPAddressPool
with the advertisement.
Apply the configuration for the IP address pool:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
View the address pool by entering the following command:
oc describe -n metallb-system IPAddressPool doc-example
$ oc describe -n metallb-system IPAddressPool doc-example
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Confirm that the address pool name, such as
doc-example
, and the IP address ranges exist in the output.
4.1.3. Configure MetalLB address pool for VLAN Copy linkLink copied to clipboard!
As a cluster administrator, you can add address pools to your cluster to control the IP addresses on a created VLAN that MetalLB can assign to load-balancer services.
Prerequisites
-
Install the OpenShift CLI (
oc
). - Configure a separate VLAN.
-
Log in as a user with
cluster-admin
privileges.
Procedure
Create a file, such as
ipaddresspool-vlan.yaml
, that is similar to the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- This label assigned to the
IPAddressPool
can be referenced by theipAddressPoolSelectors
in theBGPAdvertisement
CRD to associate theIPAddressPool
with the advertisement. - 2
- This IP range must match the subnet assigned to the VLAN on your network. To support layer 2 (L2) mode, the IP address range must be within the same subnet as the cluster nodes.
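A minimal sketch of such an ipaddresspool-vlan.yaml file; the label and the VLAN subnet range are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ipaddresspool-vlan
  namespace: metallb-system
  labels:
    zone: east
spec:
  addresses:
  - 192.168.100.1-192.168.100.254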
Apply the configuration for the IP address pool:
oc apply -f ipaddresspool-vlan.yaml
$ oc apply -f ipaddresspool-vlan.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To ensure this configuration applies to the VLAN you need to set the
spec
gatewayConfig.ipForwarding
toGlobal
.Run the following command to edit the network configuration custom resource (CR):
oc edit network.operator.openshift/cluster
$ oc edit network.operator.openshift/cluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the
spec.defaultNetwork.ovnKubernetesConfig
section to include thegatewayConfig.ipForwarding
set toGlobal
. It should look something like this:Example
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.1.4. Example address pool configurations Copy linkLink copied to clipboard!
The following examples show address pool configurations for specific scenarios.
4.1.4.1. Example: IPv4 and CIDR ranges Copy linkLink copied to clipboard!
You can specify a range of IP addresses in classless inter-domain routing (CIDR) notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds.
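A minimal sketch of an IPAddressPool that mixes CIDR notation with a hyphenated range; the addresses are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-cidr
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.0/24
  - 192.168.200.0/24
  - 192.168.255.1-192.168.255.5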
4.1.4.2. Example: Assign IP addresses Copy linkLink copied to clipboard!
You can set the autoAssign
field to false
to prevent MetalLB from automatically assigning IP addresses from the address pool. You can then assign a single IP address or multiple IP addresses from an IP address pool. To assign an IP address, append the /32
CIDR notation to the target IP address in the spec.addresses
parameter. This setting ensures that only the specific IP address is avilable for assignment, leaving non-reserved IP addresses for application use.
Example IPAddressPool
CR that assigns multiple IP addresses
When you add a service, you can request a specific IP address from the address pool or you can specify the pool name in an annotation to request any IP address from the pool.
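A minimal sketch of such an IPAddressPool CR with automatic assignment disabled; the reserved addresses are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-reserved
  namespace: metallb-system
spec:
  addresses:
  - 10.0.100.101/32
  - 10.0.100.102/32
  autoAssign: false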
4.1.4.3. Example: IPv4 and IPv6 addresses Copy linkLink copied to clipboard!
You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses
list, just like several IPv4 examples.
Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies
and spec.ipFamilyPolicy
fields control how IP addresses are assigned to the service.
- 1
- Where
10.0.100.0/28
is the local network IP address followed by the/28
network prefix.
4.1.4.4. Example: Assign IP address pools to services or namespaces Copy linkLink copied to clipboard!
You can assign IP addresses from an IPAddressPool
to services and namespaces that you specify.
If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority.
You can use the matchLabels
label selector, the matchExpressions
label selector, or both, for the namespaceSelectors
and serviceSelectors
specifications. This example demonstrates one label selector for each specification.
- 1
- Assign a priority to the address pool. A lower number indicates a higher priority.
- 2
- Assign one or more namespaces to the IP address pool in a list format.
- 3
- Assign one or more namespace labels to the IP address pool by using label selectors in a list format.
- 4
- Assign one or more service labels to the IP address pool by using label selectors in a list format.
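A minimal sketch of an IPAddressPool with a serviceAllocation specification covering the four callouts above; the priority value, namespaces, and labels are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-service-allocation
  namespace: metallb-system
spec:
  addresses:
  - 192.168.20.0/24
  serviceAllocation:
    priority: 50
    namespaces:
    - namespace-a
    - namespace-b
    namespaceSelectors:
    - matchLabels:
        zone: east
    serviceSelectors:
    - matchExpressions:
      - key: app
        operator: In
        values:
        - web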
4.1.5. Next steps Copy linkLink copied to clipboard!
4.2. About advertising for the IP address pools Copy linkLink copied to clipboard!
You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing.
MetalLB supports advertising using L2 and BGP for the same set of IP addresses.
MetalLB provides the flexibility to assign address pools to specific BGP peers, effectively to a subset of nodes on the network. This allows for more complex configurations, for example, facilitating the isolation of nodes or the segmentation of the network.
4.2.1. About the BGPAdvertisement custom resource Copy linkLink copied to clipboard!
The fields for the BGPAdvertisements
object are defined in the following table:
Field | Type | Description |
---|---|---|
|
| Specifies the name for the BGP advertisement. |
|
| Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. |
|
|
Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of |
|
|
Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of |
|
| Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values:
|
|
| Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. |
|
|
Optional: The list of |
|
|
Optional: A selector for the |
|
|
Optional: |
|
|
Optional: Use a list to specify the |
4.2.2. Configuring MetalLB with a BGP advertisement and a basic use case Copy linkLink copied to clipboard!
Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32
route and one fc00:f853:ccd:e799::1/128
route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref
and communities
fields are not specified, the routes are advertised with localPref
set to zero and no BGP communities.
4.2.2.1. Example: Advertise a basic address pool configuration with BGP Copy linkLink copied to clipboard!
Configure MetalLB as follows so that the IPAddressPool
is advertised with the BGP protocol.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a BGP advertisement.
Create a file, such as
bgpadvertisement.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f bgpadvertisement.yaml
$ oc apply -f bgpadvertisement.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
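Minimal sketches of the ipaddresspool.yaml and bgpadvertisement.yaml files referenced above; the object names and address ranges are illustrative assumptions that match the ranges discussed in this section:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-bgp-basic
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.200/30
  - fc00:f853:ccd:e799::/124
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadvertisement-basic
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-bgp-basic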
4.2.3. Configuring MetalLB with a BGP advertisement and an advanced use case Copy linkLink copied to clipboard!
Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200
and 203.0.113.203
and between fc00:f853:ccd:e799::0
and fc00:f853:ccd:e799::f
.
To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200
to a service. With that IP address as an example, the speaker advertises two routes to BGP peers:
-
203.0.113.200/32
, withlocalPref
set to100
and the community set to the numeric value of theNO_ADVERTISE
community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. -
203.0.113.200/30
, aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to8000:800
. BGP peers propagate the203.0.113.200/30
route to other BGP peers. When traffic is routed to a node with a speaker, the203.0.113.200/32
route is used to forward the traffic into the cluster and to a pod that is associated with the service.
As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32
, for each service, as well as the 203.0.113.200/30
aggregate route. Each service that you add generates the /30
route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers.
4.2.3.1. Example: Advertise an advanced address pool configuration with BGP Copy linkLink copied to clipboard!
Configure MetalLB as follows so that the IPAddressPool
is advertised with the BGP protocol.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a BGP advertisement.
Create a file, such as
bgpadvertisement1.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f bgpadvertisement1.yaml
$ oc apply -f bgpadvertisement1.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a file, such as
bgpadvertisement2.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f bgpadvertisement2.yaml
$ oc apply -f bgpadvertisement2.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.2.4. Advertising an IP address pool from a subset of nodes Copy linkLink copied to clipboard!
To advertise an IP address from an IP addresses pool, from a specific set of nodes only, use the .spec.nodeSelector
specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP addresses from an address pool from a specific subnet, for example a public-facing subnet only.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool by using a custom resource:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Control which nodes in the cluster the IP address from
pool1
advertises from by defining the.spec.nodeSelector
value in the BGPAdvertisement custom resource:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
In this example, the IP address from pool1
advertises from NodeA
and NodeB
only.
4.2.5. About the L2Advertisement custom resource Copy linkLink copied to clipboard!
The fields for the l2Advertisements
object are defined in the following table:
Field | Type | Description |
---|---|---|
|
| Specifies the name for the L2 advertisement. |
|
| Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. |
|
|
Optional: The list of |
|
|
Optional: A selector for the |
|
|
Optional: Important Limiting the nodes to announce as next hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
|
|
Optional: The list of |
4.2.6. Configuring MetalLB with an L2 advertisement Copy linkLink copied to clipboard!
Configure MetalLB as follows so that the IPAddressPool
is advertised with the L2 protocol.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a L2 advertisement.
Create a file, such as
l2advertisement.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f l2advertisement.yaml
$ oc apply -f l2advertisement.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
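Minimal sketches of the ipaddresspool.yaml and l2advertisement.yaml files referenced above; the pool name and address range are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-l2
  namespace: metallb-system
spec:
  addresses:
  - 4.4.4.0/24
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-l2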
4.2.7. Configuring MetalLB with a L2 advertisement and label Copy linkLink copied to clipboard!
The ipAddressPoolSelectors
field in the BGPAdvertisement
and L2Advertisement
custom resource definitions is used to associate the IPAddressPool
to the advertisement based on the label assigned to the IPAddressPool
instead of the name itself.
This example shows how to configure MetalLB so that the IPAddressPool
is advertised with the L2 protocol by configuring the ipAddressPoolSelectors
field.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a L2 advertisement advertising the IP using
ipAddressPoolSelectors
.Create a file, such as
l2advertisement.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f l2advertisement.yaml
$ oc apply -f l2advertisement.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
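Minimal sketches of the two files for this label-based variant; the zone: east label and the address are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example-l2-label
  namespace: metallb-system
  labels:
    zone: east
spec:
  addresses:
  - 172.31.249.87/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement-label
  namespace: metallb-system
spec:
  ipAddressPoolSelectors:
  - matchExpressions:
    - key: zone
      operator: In
      values:
      - east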
4.2.8. Configuring MetalLB with an L2 advertisement for selected interfaces Copy linkLink copied to clipboard!
By default, the IP addresses from IP address pool that has been assigned to the service, is advertised from all the network interfaces. The interfaces
field in the L2Advertisement
custom resource definition is used to restrict those network interfaces that advertise the IP address pool.
This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces
field of all nodes.
Prerequisites
-
You have installed the OpenShift CLI (
oc
). -
You are logged in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, and enter the configuration details like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool like the following example:
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a L2 advertisement advertising the IP with
interfaces
selector.Create a YAML file, such as
l2advertisement.yaml
, and enter the configuration details like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the advertisement like the following example:
oc apply -f l2advertisement.yaml
$ oc apply -f l2advertisement.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
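A minimal sketch of an l2advertisement.yaml file that restricts announcements to selected interfaces; the pool name and interface names are illustrative assumptions:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement-interfaces
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-interfaces
  interfaces:
  - eth0
  - eth1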
The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface.
4.2.9. Configuring MetalLB with secondary networks Copy linkLink copied to clipboard!
From OpenShift Container Platform 4.14, the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you must add a machine configuration to enable IP forwarding for only the required interfaces.
OpenShift Container Platform clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding.
To enable IP forwarding for the secondary interface, you have two options:
- Enable IP forwarding for a specific interface.
Enable IP forwarding for all interfaces.
NoteEnabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting.
4.2.9.1. Enabling IP forwarding for a specific interface Copy linkLink copied to clipboard!
Procedure
Patch the Cluster Network Operator, setting the parameter
routingViaHost
totrue
, by running the following command:oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig": {"routingViaHost": true} }}}}' --type=merge
$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig": {"routingViaHost": true} }}}}' --type=merge
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Enable forwarding for a specific secondary interface, such as
bridge-net
by creating and applying aMachineConfig
CR:Base64-encode the string that is used to configure network kernel parameters by running the following command on your local machine:
echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0
$ echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo=
bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo=
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Create the
MachineConfig
CR to enable IP forwarding for the specified secondary interface namedbridge-net
. Save the following YAML in the
enable-ip-forward.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration by running the following command:
oc apply -f enable-ip-forward.yaml
$ oc apply -f enable-ip-forward.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
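A minimal sketch of the enable-ip-forward.yaml MachineConfig; the object name and the worker role are illustrative assumptions, and the base64 string is the output of the earlier encoding step:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 81-enable-ip-forward-bridge-net
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/sysctl.d/enable-global-forwarding.conf
        mode: 0644
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_string_from_previous_step>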
Verification
After you apply the machine config, verify the changes by following this procedure:
Enter into a debug session on the target node by running the following command:
oc debug node/<node-name>
$ oc debug node/<node-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This step instantiates a debug pod called
<node-name>-debug
.Set
/host
as the root directory within the debug shell by running the following command:chroot /host
$ chroot /host
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The debug pod mounts the host’s root file system in
/host
within the pod. By changing the root directory to/host
, you can run binaries contained in the host’s executable paths.Verify that IP forwarding is enabled by running the following command:
cat /etc/sysctl.d/enable-global-forwarding.conf
$ cat /etc/sysctl.d/enable-global-forwarding.conf
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Expected output
net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0
net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output indicates that IPv4 and IPv6 packet forwarding is enabled on the
bridge-net
interface.
4.2.9.2. Enabling IP forwarding globally Copy linkLink copied to clipboard!
- Enable IP forwarding globally by running the following command:
oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
4.3. Configuring MetalLB BGP peers Copy linkLink copied to clipboard!
As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers that MetalLB speaker
pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services.
4.3.1. About the BGP peer custom resource Copy linkLink copied to clipboard!
The fields for the BGP peer custom resource are described in the following table.
Field | Type | Description |
---|---|---|
|
| Specifies the name for the BGP peer custom resource. |
|
| Specifies the namespace for the BGP peer custom resource. |
|
|
Specifies the Autonomous System Number (ASN) for the local end of the BGP session. In all BGP peer custom resources that you add, specify the same value . The range is |
|
|
Specifies the ASN for the remote end of the BGP session. The range is |
|
|
Detects the ASN to use for the remote end of the session without explicitly setting it. Specify |
|
|
Specifies the IP address of the peer to contact for establishing the BGP session. If you use this field, you cannot specify a value in the |
|
|
Specifies the interface name to use when establishing a session. Use this field to configure unnumbered BGP peering. You must establish a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection. If you use this field, you cannot specify a value in the |
|
| Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. |
|
|
Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is |
|
|
Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( |
|
|
Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the |
|
| Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. |
|
| Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. |
|
|
Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the |
|
| Optional: Specifies the name of a BFD profile. |
|
| Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. |
|
|
Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to |
|
| Specifies how long BGP waits between connection attempts to a neighbor. |
The passwordSecret
field is mutually exclusive with the password
field, and contains a reference to a secret that contains the password to use. Setting both fields causes parsing to fail.
4.3.2. Configuring a BGP peer Copy linkLink copied to clipboard!
As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges. - Configure MetalLB with a BGP advertisement.
Procedure
Create a file, such as
bgppeer.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the BGP peer:
oc apply -f bgppeer.yaml
$ oc apply -f bgppeer.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
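A minimal sketch of such a bgppeer.yaml file; the ASNs and peer address are illustrative assumptions:
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1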
4.3.3. Configure a specific set of BGP peers for a given address pool Copy linkLink copied to clipboard!
This procedure illustrates how to:
-
Configure a set of address pools (
pool1
andpool2
). -
Configure a set of BGP peers (
peer1
andpeer2
). -
Configure BGP advertisement to assign
pool1
topeer1
andpool2
topeer2
.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create address pool
pool1
.Create a file, such as
ipaddresspool1.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool
pool1
:oc apply -f ipaddresspool1.yaml
$ oc apply -f ipaddresspool1.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create address pool
pool2
.Create a file, such as
ipaddresspool2.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the IP address pool
pool2
:oc apply -f ipaddresspool2.yaml
$ oc apply -f ipaddresspool2.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create BGP
peer1
.Create a file, such as
bgppeer1.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the BGP peer:
oc apply -f bgppeer1.yaml
$ oc apply -f bgppeer1.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create BGP
peer2
.Create a file, such as
bgppeer2.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration for the BGP peer2:
oc apply -f bgppeer2.yaml
$ oc apply -f bgppeer2.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create BGP advertisement 1.
Create a file, such as
bgpadvertisement1.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f bgpadvertisement1.yaml
$ oc apply -f bgpadvertisement1.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create BGP advertisement 2.
Create a file, such as
bgpadvertisement2.yaml
, with content like the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration:
oc apply -f bgpadvertisement2.yaml
$ oc apply -f bgpadvertisement2.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.3.4. Exposing a service through a network VRF Copy linkLink copied to clipboard!
You can expose a service through a virtual routing and forwarding (VRF) instance by associating a VRF on a network interface with a BGP peer.
Exposing a service through a VRF on a BGP peer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By using a VRF on a network interface to expose a service through a BGP peer, you can segregate traffic to the service, configure independent routing decisions, and enable multi-tenancy support on a network interface.
By establishing a BGP session through an interface belonging to a network VRF, MetalLB can advertise services through that interface and enable external traffic to reach the service through this interface. However, the network VRF routing table is different from the default VRF routing table used by OVN-Kubernetes. Therefore, the traffic cannot reach the OVN-Kubernetes network infrastructure.
To enable the traffic directed to the service to reach the OVN-Kubernetes network infrastructure, you must configure routing rules to define the next hops for network traffic. See the NodeNetworkConfigurationPolicy
resource in "Managing symmetric routing with MetalLB" in the Additional resources section for more information.
These are the high-level steps to expose a service through a network VRF with a BGP peer:
- Define a BGP peer and add a network VRF instance.
- Specify an IP address pool for MetalLB.
- Configure a BGP route advertisement for MetalLB to advertise a route using the specified IP address pool and the BGP peer associated with the VRF instance.
- Deploy a service to test the configuration.
Prerequisites
-
You installed the OpenShift CLI (
oc
). -
You logged in as a user with
cluster-admin
privileges. -
You defined a
NodeNetworkConfigurationPolicy
to associate a Virtual Routing and Forwarding (VRF) instance with a network interface. For more information about completing this prerequisite, see the Additional resources section. - You installed MetalLB on your cluster.
Procedure
Create a
BGPPeer
custom resource (CR):Create a file, such as
frrviavrf.yaml
, with content like the following example:
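A sketch of a BGPPeer that references a network VRF, assuming the metallb.io/v1beta2 API version; the ASN values and peer address align with the verification output shown later in this procedure, and the VRF name is a placeholder:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: frrviavrf
  namespace: metallb-system
spec:
  myASN: 100
  peerASN: 200
  peerAddress: 192.168.30.1
  vrf: ens4vrf    # 1
- 1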
- Specifies the network VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
NoteYou must configure this network VRF instance in a
NodeNetworkConfigurationPolicy
CR. See the Additional resources for more information.Apply the configuration for the BGP peer by running the following command:
oc apply -f frrviavrf.yaml
$ oc apply -f frrviavrf.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create an
IPAddressPool
CR:Create a file, such as
first-pool.yaml
, with content like the following example:
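A minimal sketch of the IPAddressPool; the address range is a placeholder:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.169.10.0/32    # placeholder

Apply the configuration for the IP address pool by running the following command: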
oc apply -f first-pool.yaml
$ oc apply -f first-pool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a
BGPAdvertisement
CR:Create a file, such as
first-adv.yaml
, with content like the following example:
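A sketch of the BGPAdvertisement that advertises the pool to the VRF-backed peer:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: first-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
  peers:
    - frrviavrf    # 1
- 1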
- In this example, MetalLB advertises a range of IP addresses from the
first-pool
IP address pool to thefrrviavrf
BGP peer.
Apply the configuration for the BGP advertisement by running the following command:
oc apply -f first-adv.yaml
$ oc apply -f first-adv.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a
Namespace
,Deployment
, andService
CR:Create a file, such as
deploy-service.yaml
, with content like the following example:
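A sketch of the namespace, deployment, and load-balancer service; the namespace, names, port, and container image are placeholders for your test application:

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: <application_image>    # placeholder: any image that listens on the target port
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: server1
  namespace: test
spec:
  selector:
    app: server
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  type: LoadBalancer

Apply the configuration for the namespace, deployment, and service by running the following command: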
oc apply -f deploy-service.yaml
$ oc apply -f deploy-service.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Identify a MetalLB speaker pod by running the following command:
oc get -n metallb-system pods -l component=speaker
$ oc get -n metallb-system pods -l component=speaker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m
NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the state of the BGP session is
Established
in the speaker pod by running the following command, replacing the variables to match your configuration:oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh"
$ oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ...
BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the service is advertised correctly by running the following command:
oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4"
$ oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.3.5. Example BGP peer configurations Copy linkLink copied to clipboard!
4.3.5.1. Example: Limit which nodes connect to a BGP peer Copy linkLink copied to clipboard!
You can specify the nodeSelectors field to control which nodes can connect to a BGP peer.
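A sketch of a BGPPeer that limits the session to specific nodes, assuming the metallb.io/v1beta2 API version; the peer values and host names are placeholders:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-nodesel
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1    # placeholder
  peerASN: 64501            # placeholder
  myASN: 64500              # placeholder
  nodeSelectors:
  - matchExpressions:
    - key: kubernetes.io/hostname
      operator: In
      values: [compute-1.example.com, compute-2.example.com]    # placeholder host names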
4.3.5.2. Example: Specify a BFD profile for a BGP peer Copy linkLink copied to clipboard!
You can specify a BFD profile to associate with BGP peers. BFD complements BGP by providing more rapid detection of communication failures between peers than BGP alone.
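A sketch of a BGPPeer that references a BFD profile; the profile name must match an existing BFDProfile resource, and the other values are placeholders:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer-bfd
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1    # placeholder
  peerASN: 64501            # placeholder
  myASN: 64500              # placeholder
  holdTime: "10s"
  bfdProfile: doc-example-bfd-profile-full    # name of an existing BFD profile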
Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile
added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824.
4.3.5.3. Example: Specify BGP peers for dual-stack networking Copy linkLink copied to clipboard!
To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6.
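A sketch of the two peer resources, one per address family; the addresses and ASN values are placeholders:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-dual-stack-ipv4
  namespace: metallb-system
spec:
  peerAddress: 10.0.20.1            # IPv4 peer address (placeholder)
  peerASN: 64500
  myASN: 64500
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-dual-stack-ipv6
  namespace: metallb-system
spec:
  peerAddress: 2620:52:0:88::104    # IPv6 peer address (placeholder)
  peerASN: 64500
  myASN: 64500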
4.3.5.4. Example: Specify BGP peers for unnumbered BGP peering Copy linkLink copied to clipboard!
The spec.interface
field is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure unnumbered BGP peering, specify the interface in the spec.interface
field by using the following example configuration:
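A sketch of a BGPPeer that uses the interface field instead of an address; the interface name and ASN values are placeholders:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: peer-unnumber
  namespace: metallb-system
spec:
  myASN: 64512        # placeholder
  peerASN: 64513      # placeholder
  interface: net0     # point-to-point interface toward the peer (placeholder)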
To use the interface
field, you must establish a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection.
If you use this field, you cannot specify a value in the spec.bgp.routers.neighbors.address
field.
4.3.6. Next steps Copy linkLink copied to clipboard!
4.4. Configuring community alias Copy linkLink copied to clipboard!
As a cluster administrator, you can configure a community alias and use it across different advertisements.
4.4.1. About the community custom resource Copy linkLink copied to clipboard!
The community
custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools
using the BGPAdvertisement
. The fields for the community
custom resource are described in the following table.
The community
CRD applies only to BGPAdvertisement.
Field | Type | Description |
---|---|---|
|
|
Specifies the name for the |
|
|
Specifies the namespace for the |
|
|
Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its |
Field | Type | Description |
---|---|---|
|
|
The name of the alias for the |
|
|
The BGP |
4.4.2. Configuring MetalLB with a BGP advertisement and community alias Copy linkLink copied to clipboard!
Configure MetalLB as follows so that the IPAddressPool
is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community.
In the following example, the peer BGP router doc-example-peer-community
receives one 203.0.113.200/32
route and one fc00:f853:ccd:e799::1/128
route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE
community.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create an IP address pool.
Create a file, such as
ipaddresspool.yaml
, with content like the following example:
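A sketch of an IPAddressPool whose ranges match the /32 and /128 routes described above:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-bgp-community
spec:
  addresses:
    - 203.0.113.200/30
    - fc00:f853:ccd:e799::/124

Apply the configuration for the IP address pool: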
oc apply -f ipaddresspool.yaml
$ oc apply -f ipaddresspool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a community alias named
community1
.
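A sketch of the Community resource; the alias maps the name NO_ADVERTISE to 65535:65282, which is the well-known value of the NO_ADVERTISE BGP community:

apiVersion: metallb.io/v1beta1
kind: Community
metadata:
  name: community1
  namespace: metallb-system
spec:
  communities:
  - name: NO_ADVERTISE       # alias name that the BGPAdvertisement references
    value: '65535:65282'     # well-known NO_ADVERTISE community value

Create a BGP peer named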
doc-example-bgp-peer
.Create a file, such as
bgppeer.yaml
, with content like the following example:
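A minimal sketch of the BGPPeer, assuming the metallb.io/v1beta2 API version; the address and ASN values are placeholders:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  namespace: metallb-system
  name: doc-example-bgp-peer
spec:
  peerAddress: 10.0.0.1    # placeholder
  peerASN: 64501           # placeholder
  myASN: 64500             # placeholder

Apply the configuration for the BGP peer: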
oc apply -f bgppeer.yaml
$ oc apply -f bgppeer.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a BGP advertisement with the community alias.
Create a file, such as
bgpadvertisement.yaml
, with content like the following example:
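A sketch of the BGPAdvertisement; the aggregation lengths produce the /32 and /128 routes described above, and the communities entry references the alias name defined in the Community resource:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgp-community-sample
  namespace: metallb-system
spec:
  aggregationLength: 32
  aggregationLengthV6: 128
  communities:
    - NO_ADVERTISE    # 1
  ipAddressPools:
    - doc-example-bgp-community
  peers:
    - doc-example-bgp-peer
- 1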
- Specify the
CommunityAlias.name
here and not the community custom resource (CR) name.
Apply the configuration:
oc apply -f bgpadvertisement.yaml
$ oc apply -f bgpadvertisement.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.5. Configuring MetalLB BFD profiles Copy linkLink copied to clipboard!
As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides.
4.5.1. About the BFD profile custom resource Copy linkLink copied to clipboard!
The fields for the BFD profile custom resource are described in the following table.
Field | Type | Description |
---|---|---|
|
| Specifies the name for the BFD profile custom resource. |
|
| Specifies the namespace for the BFD profile custom resource. |
|
| Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer.
For example, when the local system has the detect multiplier set to
The range is |
|
|
Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is
When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to |
|
|
Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is |
|
| Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions.
The default value is |
|
| Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send.
The default value is |
|
|
Specifies the minimum interval that this system is capable of receiving control packets. The range is |
|
|
Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is |
4.5.2. Configuring a BFD profile Copy linkLink copied to clipboard!
As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
Create a file, such as
bfdprofile.yaml
, with content like the following example:
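A sketch of a BFDProfile; the profile name and interval values are placeholders that you tune for your network:

apiVersion: metallb.io/v1beta1
kind: BFDProfile
metadata:
  name: test-bfd-prof
  namespace: metallb-system
spec:
  receiveInterval: 300      # milliseconds
  transmitInterval: 300     # milliseconds
  detectMultiplier: 3
  echoMode: false
  passiveMode: true
  minimumTtl: 254

Apply the configuration for the BFD profile: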
oc apply -f bfdprofile.yaml
$ oc apply -f bfdprofile.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.5.3. Next steps Copy linkLink copied to clipboard!
- Configure a BGP peer to use the BFD profile.
4.6. Configuring services to use MetalLB Copy linkLink copied to clipboard!
As a cluster administrator, when you add a service of type LoadBalancer
, you can control how MetalLB assigns an IP address.
4.6.1. Request a specific IP address Copy linkLink copied to clipboard!
Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP
field in the service specification.
If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning.
Example service YAML for a specific IP address
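A sketch of such a service; the name, selector labels, ports, and requested IP address are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerIP: <ip_address>    # requested IP address; must fall within a configured address pool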
If MetalLB cannot assign the requested IP address, the EXTERNAL-IP
for the service reports <pending>
and running oc describe service <service_name>
includes an event like the following example.
Example event when MetalLB cannot assign a requested IP address
... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config
4.6.2. Request an IP address from a specific pool Copy linkLink copied to clipboard!
If you want to assign an IP address from a specific range but are not concerned with the specific IP address, you can use the metallb.io/address-pool
annotation to request an IP address from the specified address pool.
Example service YAML for an IP address from a specific pool
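A sketch of such a service; the name, selector labels, ports, and pool name are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    metallb.io/address-pool: <address_pool_name>    # pool to allocate the address from
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer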
If the address pool that you specify for <address_pool_name>
does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment.
4.6.3. Accept any IP address Copy linkLink copied to clipboard!
By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools.
To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required.
Example service YAML for accepting any IP address
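A sketch of such a service; no annotation is needed, and the name, selector labels, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer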
4.6.5. Configuring a service with MetalLB Copy linkLink copied to clipboard!
You can configure a load-balancing service to use an external IP address from an address pool.
Prerequisites
-
Install the OpenShift CLI (
oc
). - Install the MetalLB Operator and start MetalLB.
- Configure at least one address pool.
- Configure your network to route traffic from the clients to the host network for the cluster.
Procedure
Create a
<service_name>.yaml
file. In the file, ensure that thespec.type
field is set toLoadBalancer
.Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service.
Create the service:
oc apply -f <service_name>.yaml
$ oc apply -f <service_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
service/<service_name> created
service/<service_name> created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Describe the service:
oc describe service <service_name>
$ oc describe service <service_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The annotation is present if you request an IP address from a specific pool.
- 2
- The service type must indicate
LoadBalancer
. - 3
- The load-balancer ingress field indicates the external IP address if the service is assigned correctly.
- 4
- The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error.
4.7. Managing symmetric routing with MetalLB Copy linkLink copied to clipboard!
As a cluster administrator, you can effectively manage traffic for pods behind a MetalLB load-balancer service with multiple host interfaces by implementing features from MetalLB, NMState, and OVN-Kubernetes. By combining these features in this context, you can provide symmetric routing and traffic segregation, and support clients on different networks with overlapping CIDR addresses.
To achieve this functionality, learn how to implement virtual routing and forwarding (VRF) instances with MetalLB, and configure egress services.
Configuring symmetric traffic by using a VRF instance with MetalLB and an egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.7.1. Challenges of managing symmetric routing with MetalLB Copy linkLink copied to clipboard!
When you use MetalLB with multiple host interfaces, MetalLB exposes and announces a service through all available interfaces on the host. This can present challenges relating to network isolation, asymmetric return traffic and overlapping CIDR addresses.
One option to ensure that return traffic reaches the correct client is to use static routes. However, with this solution, MetalLB cannot isolate the services and then announce each service through a different interface. Additionally, static routing requires manual configuration and requires maintenance if remote sites are added.
A further challenge of symmetric routing when implementing a MetalLB service is scenarios where external systems expect the source and destination IP address for an application to be the same. The default behavior for OpenShift Container Platform is to assign the IP address of the host network interface as the source IP address for traffic originating from pods. This is problematic with multiple host interfaces.
You can overcome these challenges by implementing a configuration that combines features from MetalLB, NMState, and OVN-Kubernetes.
4.7.2. Overview of managing symmetric routing by using VRFs with MetalLB Copy linkLink copied to clipboard!
You can overcome the challenges of implementing symmetric routing by using NMState to configure a VRF instance on a host, associating the VRF instance with a MetalLB BGPPeer
resource, and configuring an egress service for egress traffic with OVN-Kubernetes.
Figure 4.1. Network overview of managing symmetric routing by using VRFs with MetalLB
The configuration process involves three stages:
1. Define a VRF and routing rules
-
Configure a
NodeNetworkConfigurationPolicy
custom resource (CR) to associate a VRF instance with a network interface. - Use the VRF routing table to direct ingress and egress traffic.
2. Link the VRF to a MetalLB BGPPeer
-
Configure a MetalLB
BGPPeer
resource to use the VRF instance on a network interface. -
By associating the
BGPPeer
resource with the VRF instance, the designated network interface becomes the primary interface for the BGP session, and MetalLB advertises the services through this interface.
3. Configure an egress service
- Configure an egress service to choose the network associated with the VRF instance for egress traffic.
- Optional: Configure an egress service to use the IP address of the MetalLB load-balancer service as the source IP for egress traffic.
4.7.3. Configuring symmetric routing by using VRFs with MetalLB Copy linkLink copied to clipboard!
You can configure symmetric network routing for applications behind a MetalLB service that require the same ingress and egress network paths.
This example associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a LoadBalancer
service.
-
If you use the
sourceIPBy: "LoadBalancerIP"
setting in theEgressService
CR, you must specify the load-balancer node in theBGPAdvertisement
custom resource (CR). -
You can use the
sourceIPBy: "Network"
setting on clusters that use OVN-Kubernetes configured with thegatewayConfig.routingViaHost
specification set totrue
only. Additionally, if you use thesourceIPBy: "Network"
setting, you must schedule the application workload on nodes configured with the network VRF instance.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges. - Install the Kubernetes NMState Operator.
- Install the MetalLB Operator.
Procedure
Create a
NodeNetworkConfigurationPolicy
CR to define the VRF instance:Create a file, such as
node-network-vrf.yaml
, with content like the following example:
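A sketch of such a policy; the policy name, node label, interface names, addresses, and next-hop values are assumptions to adapt for your environment, and the route-rule CIDRs shown are the common cluster network, service network, and masquerade defaults that you must verify as described in the callouts:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrfpolicy                  # 1
spec:
  nodeSelector:
    vrf: "true"                    # 2
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: ens4vrf                # 3
      type: vrf                    # 4
      state: up
      vrf:
        port:
        - ens4                     # 5
        route-table-id: 2          # 6
    - name: ens4
      type: ethernet
      state: up
      ipv4:
        address:
        - ip: 192.168.130.130      # 7
          prefix-length: 24
        dhcp: false
        enabled: true
    routes:
      config:                      # 8
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.130.1
        next-hop-interface: ens4
        table-id: 2
    route-rules:
      config:                      # 9
      - ip-to: 172.30.0.0/16       # service network CIDR (verify for your cluster)
        priority: 998
        route-table: 254           # 10
      - ip-to: 10.128.0.0/14       # cluster network CIDR (verify for your cluster)
        priority: 998
        route-table: 254
      - ip-to: 169.254.169.0/29    # internal masquerade subnet (verify for your cluster)
        priority: 998
        route-table: 254
- 1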
- The name of the policy.
- 2
- This example applies the policy to all nodes with the label
vrf:true
. - 3
- The name of the interface.
- 4
- The type of interface. This example creates a VRF instance.
- 5
- The node interface that the VRF attaches to.
- 6
- The name of the route table ID for the VRF.
- 7
- The IPv4 address of the interface associated with the VRF.
- 8
- Defines the configuration for network routes. The
next-hop-address
field defines the IP address of the next hop for the route. Thenext-hop-interface
field defines the outgoing interface for the route. In this example, the VRF routing table is2
, which references the ID that you define in theEgressService
CR. - 9
- Defines additional route rules. The
ip-to
fields must match theCluster Network
CIDR,Service Network
CIDR, andInternal Masquerade
subnet CIDR. You can view the values for these CIDR address specifications by running the following command:oc describe network.operator/cluster
. - 10
- The main routing table that the Linux kernel uses when calculating routes has the ID
254
.
Apply the policy by running the following command:
oc apply -f node-network-vrf.yaml
$ oc apply -f node-network-vrf.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a
BGPPeer
custom resource (CR):Create a file, such as
frr-via-vrf.yaml
, with content like the following example:
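A sketch of the BGPPeer that uses the VRF, assuming the metallb.io/v1beta2 API version; the ASN values and peer address are placeholders chosen to match the VRF interface sketched earlier:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: frrviavrf
  namespace: metallb-system
spec:
  myASN: 100
  peerASN: 200
  peerAddress: 192.168.130.1
  vrf: ens4vrf    # 1
- 1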
- Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
Apply the configuration for the BGP peer by running the following command:
oc apply -f frr-via-vrf.yaml
$ oc apply -f frr-via-vrf.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create an
IPAddressPool
CR:Create a file, such as
first-pool.yaml
, with content like the following example:
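A minimal sketch of the IPAddressPool; the address range is a placeholder:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.169.10.0/32    # placeholder

Apply the configuration for the IP address pool by running the following command: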
oc apply -f first-pool.yaml
$ oc apply -f first-pool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a
BGPAdvertisement
CR:Create a file, such as
first-adv.yaml
, with content like the following example:
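A sketch of the BGPAdvertisement; it advertises the pool to the VRF-backed peer and pins announcements to nodes that carry the VRF label:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: first-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
  peers:
    - frrviavrf        # 1
  nodeSelectors:
    - matchLabels:
        vrf: "true"    # 2
- 1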
- In this example, MetalLB advertises a range of IP addresses from the
first-pool
IP address pool to thefrrviavrf
BGP peer. - 2
- In this example, the
EgressService
CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod.
Apply the configuration for the BGP advertisement by running the following command:
oc apply -f first-adv.yaml
$ oc apply -f first-adv.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create an
EgressService
CR:Create a file, such as
egress-service.yaml
, with content like the following example:
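A sketch of the EgressService; the name and namespace shown match the placeholder service sketched earlier in this procedure, and the network value references the VRF routing table ID:

apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: server1                    # 1
  namespace: test                  # 2
spec:
  sourceIPBy: "LoadBalancerIP"     # 3
  nodeSelector:
    matchLabels:
      vrf: "true"                  # 4
  network: "2"                     # 5
- 1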
- Specify the name for the egress service. The name of the
EgressService
resource must match the name of the load-balancer service that you want to modify. - 2
- Specify the namespace for the egress service. The namespace for the
EgressService
must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. - 3
- This example assigns the
LoadBalancer
service ingress IP address as the source IP address for egress traffic. - 4
- If you specify
LoadBalancer
for thesourceIPBy
specification, a single node handles theLoadBalancer
service traffic. In this example, only a node with the labelvrf: "true"
can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format:egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: ""
. - 5
- Specify the routing table ID for egress traffic. Ensure that the value matches the
route-table-id
ID defined in theNodeNetworkConfigurationPolicy
resource, for example,route-table-id: 2
.
Apply the configuration for the egress service by running the following command:
oc apply -f egress-service.yaml
$ oc apply -f egress-service.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command:
curl <external_ip_address>:<port_number>
$ curl <external_ip_address>:<port_number>
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Update the external IP address and port number to suit your application endpoint.
-
Optional: If you assigned the
LoadBalancer
service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such astcpdump
to analyze packets received at the external client.
4.8. Configuring the integration of MetalLB and FRR-K8s Copy linkLink copied to clipboard!
FRRouting (FRR) is a free, open source internet routing protocol suite for Linux and UNIX platforms. FRR-K8s
is a Kubernetes based DaemonSet that exposes a subset of the FRR
API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration
custom resource (CR) to access some of the FRR services not provided by MetalLB, for example, receiving routes. MetalLB
generates the FRR-K8s
configuration corresponding to the MetalLB configuration applied.
When you configure Virtual Routing and Forwarding (VRF), you must set the VRF table ID to a value lower than 1000, because table IDs higher than 1000 are reserved for OpenShift Container Platform.
4.8.1. FRR configurations Copy linkLink copied to clipboard!
You can create multiple FRRConfiguration
CRs to use FRR
services in MetalLB
. MetalLB
generates an FRRConfiguration
object which FRR-K8s
merges with all other configurations that all users have created.
For example, you can configure FRR-K8s
to receive all of the prefixes advertised by a given neighbor. The following example configures FRR-K8s
to receive all of the prefixes advertised by a BGPPeer
with host 172.18.0.5
:
Example FRRConfiguration CR
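A sketch of such a configuration; the resource name and ASN values are placeholders, and the neighbor address matches the peer described above:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: metallb-extras
  namespace: metallb-system
spec:
  bgp:
    routers:
    - asn: 64512                # local ASN (placeholder)
      neighbors:
      - address: 172.18.0.5
        asn: 64512              # remote ASN (placeholder)
        toReceive:
          allowed:
            mode: all           # receive every prefix advertised by this neighbor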
You can also configure FRR-K8s to always block a set of prefixes, regardless of the configuration applied. This can be useful to avoid routes toward the pod or ClusterIP
CIDRs that might result in cluster malfunctions. The following example blocks the set of prefixes 192.168.1.0/24
:
Example MetalLB CR
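A sketch of the MetalLB resource; the frrk8sConfig.alwaysBlock field name is an assumption based on the MetalLB Operator API and should be verified against your installed version:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  frrk8sConfig:
    alwaysBlock:          # assumed field; prefixes listed here are never advertised or received
    - 192.168.1.0/24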
You can set FRR-K8s
to block the Cluster Network
CIDR and Service Network
CIDR. You can view the values for these CIDR address specifications by running the following command:
oc describe network.config/cluster
$ oc describe network.config/cluster
4.8.2. Configuring the FRRConfiguration CRD Copy linkLink copied to clipboard!
The following section provides reference examples that use the FRRConfiguration
custom resource (CR).
4.8.2.1. The routers field Copy linkLink copied to clipboard!
You can use the routers
field to configure multiple routers, one for each Virtual Routing and Forwarding (VRF) resource. For each router, you must define the Autonomous System Number (ASN).
You can also define a list of Border Gateway Protocol (BGP) neighbors to connect to, as in the following example:
Example FRRConfiguration CR
4.8.2.2. The toAdvertise field Copy linkLink copied to clipboard!
By default, FRR-K8s
does not advertise the prefixes configured as part of a router configuration. In order to advertise them, you use the toAdvertise
field.
You can advertise a subset of the prefixes, as in the following example:
Example FRRConfiguration CR
- 1
- Advertises a subset of prefixes.
The following example shows you how to advertise all of the prefixes:
Example FRRConfiguration CR
- 1
- Advertises all prefixes.
4.8.2.3. The toReceive field Copy linkLink copied to clipboard!
By default, FRR-K8s
does not process any prefixes advertised by a neighbor. You can use the toReceive
field to process such addresses.
You can configure FRR to receive a subset of the prefixes, as in this example:
Example FRRConfiguration CR
The following example configures FRR to handle all the prefixes announced:
Example FRRConfiguration CR
4.8.2.4. The bgp field Copy linkLink copied to clipboard!
You can use the bgp
field to define various BFD
profiles and associate them with a neighbor. In the following example, BFD
backs up the BGP
session and FRR
can detect link failures:
Example FRRConfiguration CR
4.8.2.5. The nodeSelector field Copy linkLink copied to clipboard!
By default, FRR-K8s
applies the configuration to all nodes where the daemon is running. You can use the nodeSelector
field to specify the nodes to which you want to apply the configuration. For example:
Example FRRConfiguration CR
4.8.2.6. The interface field Copy linkLink copied to clipboard!
The spec.bgp.routers.neighbors.interface
field is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use the interface
field to configure unnumbered BGP peering by using the following example configuration:
Example FRRConfiguration
CR
- 1
- Activates unnumbered BGP peering.
To use the interface
field, you must establish a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection.
If you use this field, you cannot specify a value in the spec.bgp.routers.neighbors.address
field.
The fields for the FRRConfiguration
custom resource are described in the following table:
Field | Type | Description |
---|---|---|
|
| Specifies the routers that FRR is to configure (one per VRF). |
|
| The Autonomous System Number (ASN) to use for the local end of the session. |
|
|
Specifies the ID of the |
|
| Specifies the host vrf used to establish sessions from this router. |
|
| Specifies the neighbors to establish BGP sessions with. |
|
|
Specifies the ASN to use for the remote end of the session. If you use this field, you cannot specify a value in the |
|
|
Detects the ASN to use for the remote end of the session without explicitly setting it. Specify |
|
|
Specifies the IP address to establish the session with. If you use this field, you cannot specify a value in the |
|
|
Specifies the interface name to use when establishing a session. Use this field to configure unnumbered BGP peering. There must be a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection. The |
|
| Specifies the port to dial when establishing the session. Defaults to 179. |
|
|
Specifies the password to use for establishing the BGP session. |
|
|
Specifies the name of the authentication secret for the neighbor. The secret must be of type "kubernetes.io/basic-auth", and in the same namespace as the FRR-K8s daemon. The key "password" stores the password in the secret. |
|
| Specifies the requested BGP hold time, per RFC4271. Defaults to 180s. |
|
|
Specifies the requested BGP keepalive time, per RFC4271. Defaults to |
|
| Specifies how long BGP waits between connection attempts to a neighbor. |
|
| Indicates if the BGPPeer is multi-hops away. |
|
| Specifies the name of the BFD Profile to use for the BFD session associated with the BGP session. If not set, the BFD session is not set up. |
|
| Represents the list of prefixes to advertise to a neighbor, and the associated properties. |
|
| Specifies the list of prefixes to advertise to a neighbor. This list must match the prefixes that you define in the router. |
|
|
Specifies the mode to use when handling the prefixes. You can set to |
|
| Specifies the prefixes associated with an advertised local preference. You must specify the prefixes associated with a local preference in the prefixes allowed to be advertised. |
|
| Specifies the prefixes associated with the local preference. |
|
| Specifies the local preference associated with the prefixes. |
|
| Specifies the prefixes associated with an advertised BGP community. You must include the prefixes associated with a local preference in the list of prefixes that you want to advertise. |
|
| Specifies the prefixes associated with the community. |
|
| Specifies the community associated with the prefixes. |
|
| Specifies the prefixes to receive from a neighbor. |
|
| Specifies the information that you want to receive from a neighbor. |
|
| Specifies the prefixes allowed from a neighbor. |
|
|
Specifies the mode to use when handling the prefixes. When set to |
|
| Disables MP BGP to prevent it from separating IPv4 and IPv6 route exchanges into distinct BGP sessions. |
|
| Specifies all prefixes to advertise from this router instance. |
|
| Specifies the list of bfd profiles to use when configuring the neighbors. |
|
| The name of the BFD Profile to be referenced in other parts of the configuration. |
|
|
Specifies the minimum interval at which this system can receive control packets, in milliseconds. Defaults to |
|
|
Specifies the minimum transmission interval, excluding jitter, that this system wants to use to send BFD control packets, in milliseconds. Defaults to |
|
| Configures the detection multiplier to determine packet loss. To determine the connection loss-detection timer, multiply the remote transmission interval by this value. |
|
|
Configures the minimal echo receive transmission-interval that this system can handle, in milliseconds. Defaults to |
|
| Enables or disables the echo transmission mode. This mode is disabled by default, and not supported on multihop setups. |
|
| Mark session as passive. A passive session does not attempt to start the connection and waits for control packets from peers before it begins replying. |
|
| For multihop sessions only. Configures the minimum expected TTL for an incoming BFD control packet. |
|
| Limits the nodes that attempt to apply this configuration. If specified, only those nodes whose labels match the specified selectors attempt to apply the configuration. If it is not specified, all nodes attempt to apply this configuration. |
|
| Defines the observed state of FRRConfiguration. |
4.8.3. How FRR-K8s merges multiple configurations Copy linkLink copied to clipboard!
In a case where multiple users add configurations that select the same node, FRR-K8s
merges the configurations. Each configuration can only extend others. This means that it is possible to add a new neighbor to a router, or to advertise an additional prefix to a neighbor, but not possible to remove a component added by another configuration.
4.8.3.1. Configuration conflicts Copy linkLink copied to clipboard!
Certain configurations can cause conflicts, leading to errors, for example:
- different ASN for the same router (in the same VRF)
- different ASN for the same neighbor (with the same IP / port)
- multiple BFD profiles with the same name but different values
When the daemon finds an invalid configuration for a node, it reports the configuration as invalid and reverts to the previous valid FRR
configuration.
4.8.3.2. Merging Copy linkLink copied to clipboard!
When merging, it is possible to do the following actions:
- Extend the set of IPs that you want to advertise to a neighbor.
- Add an extra neighbor with its set of IPs.
- Extend the set of IPs to which you want to associate a community.
- Allow incoming routes for a neighbor.
Each configuration must be self contained. This means, for example, that it is not possible to allow prefixes that are not defined in the router section by leveraging prefixes coming from another configuration.
If the configurations to be applied are compatible, merging works as follows:
-
FRR-K8s
combines all the routers. -
FRR-K8s
merges all prefixes and neighbors for each router. -
FRR-K8s
merges all filters for each neighbor.
A less restrictive filter has precedence over a stricter one. For example, a filter accepting some prefixes has precedence over a filter not accepting any, and a filter accepting all prefixes has precedence over one that accepts some.
4.9. MetalLB logging, troubleshooting, and support Copy linkLink copied to clipboard!
If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands.
4.9.1. Setting the MetalLB logging levels Copy linkLink copied to clipboard!
MetalLB uses FRRouting (FRR) in a container, and the default logging level of info generates a large volume of logs. You can control the verbosity of the generated logs by setting the logLevel, as illustrated in this example.
Gain a deeper insight into MetalLB by setting the logLevel
to debug
as follows:
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a file, such as
setdebugloglevel.yaml
, with content like the following example:
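A sketch of the MetalLB resource with the logging level raised to debug; the resource name and namespace assume the default MetalLB Operator installation:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug

Apply the configuration: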
oc replace -f setdebugloglevel.yaml
$ oc replace -f setdebugloglevel.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteUse
oc replace
because the
CR is already created and here you are changing the log level.Display the names of the
speaker
pods:oc get -n metallb-system pods -l component=speaker
$ oc get -n metallb-system pods -l component=speaker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s
NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSpeaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB.
View the
speaker
logs:oc logs -n metallb-system speaker-7m4qw -c speaker
$ oc logs -n metallb-system speaker-7m4qw -c speaker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the FRR logs:
oc logs -n metallb-system speaker-7m4qw -c frr
$ oc logs -n metallb-system speaker-7m4qw -c frr
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.9.1.1. FRRouting (FRR) log levels Copy linkLink copied to clipboard!
The following table describes the FRR logging levels.
Log level | Description |
---|---|
| Supplies all logging information for all logging levels. |
|
Information that is diagnostically helpful to people. Set to |
| Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. |
|
Anything that can potentially cause inconsistent |
|
Any error that is fatal to the functioning of |
| Turn off all logging. |
4.9.2. Troubleshooting BGP issues Copy linkLink copied to clipboard!
As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Display the names of the
frr-k8s
pods by running the following command:oc -n metallb-system get pods -l component=frr-k8s
$ oc -n metallb-system get pods -l component=frr-k8s
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE frr-k8s-thsmw 6/6 Running 0 109m
NAME READY STATUS RESTARTS AGE frr-k8s-thsmw 6/6 Running 0 109m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Display the running configuration for FRR by running the following command:
oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show running-config"
$ oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show running-config"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The
router bgp
section indicates the ASN for MetalLB. - 2
- Confirm that a
neighbor <ip-address> remote-as <peer-ASN>
line exists for each BGP peer custom resource that you added. - 3
- If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output.
- 4
- Confirm that the
network <ip-address-range>
lines match the IP address ranges that you specified in address pool custom resources that you added.
Display the BGP summary by running the following command:
oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show bgp summary"
$ oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show bgp summary"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Display the BGP peers that received an address pool by running the following command:
oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30"
$ oc exec -n metallb-system frr-k8s-thsmw -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
ipv4
withipv6
to display the BGP peers that received an IPv6 address pool. Replace203.0.113.200/30
with an IPv4 or IPv6 IP address range from an address pool.Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Confirm that the output includes an IP address for a BGP peer.
4.9.3. Troubleshooting BFD issues Copy linkLink copied to clipboard!
The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker
pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Display the names of the
speaker
pods:oc get -n metallb-system pods -l component=speaker
$ oc get -n metallb-system pods -l component=speaker
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ...
NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ...
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Display the BFD peers:
oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief"
$ oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>
Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow <.> Confirm that the
PeerAddress
column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicatesdown
, check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command likeoc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}'
.
4.9.4. MetalLB metrics for BGP and BFD Copy linkLink copied to clipboard!
OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles.
Name | Description |
---|---|
| Counts the number of BFD control packets received from each BFD peer. |
| Counts the number of BFD control packets sent to each BFD peer. |
| Counts the number of BFD echo packets received from each BFD peer. |
| Counts the number of BFD echo packets sent to each BFD peer. |
|
Counts the number of times the BFD session with a peer entered the |
|
Indicates the connection state with a BFD peer. |
|
Counts the number of times the BFD session with a peer entered the |
| Counts the number of BFD Zebra notifications for each BFD peer. |
Name | Description |
---|---|
| Counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. |
|
Indicates the connection state with a BGP peer. |
| Counts the number of BGP update messages sent to each BGP peer. |
| Counts the number of BGP open messages sent to each BGP peer. |
| Counts the number of BGP open messages received from each BGP peer. |
| Counts the number of BGP notification messages sent to each BGP peer. |
| Counts the number of BGP update messages received from each BGP peer. |
| Counts the number of BGP keepalive messages sent to each BGP peer. |
| Counts the number of BGP keepalive messages received from each BGP peer. |
| Counts the number of BGP route refresh messages sent to each BGP peer. |
| Counts the number of total BGP messages sent to each BGP peer. |
| Counts the number of total BGP messages received from each BGP peer. |
Additional resources
- See Querying metrics for all projects with the monitoring dashboard for information about using the monitoring dashboard.
4.9.5. About collecting MetalLB data Copy linkLink copied to clipboard!
You can use the oc adm must-gather
CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator:
- The namespace and child objects that the MetalLB Operator is deployed in
- All MetalLB Operator custom resource definitions (CRDs)
The oc adm must-gather
CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD:
-
/etc/frr/frr.conf
-
/etc/frr/frr.log
-
/etc/frr/daemons
configuration file -
/etc/frr/vtysh.conf
The log and configuration files in the preceding list are collected from the frr
container in each speaker
pod.
In addition to the log and configuration files, the oc adm must-gather
CLI command collects the output from the following vtysh
commands:
-
show running-config
-
show bgp ipv4
-
show bgp ipv6
-
show bgp neighbor
-
show bfd peer
No additional configuration is required when you run the oc adm must-gather
CLI command.
Additional resources
Legal Notice
Copy linkLink copied to clipboard!
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.