
Chapter 26. Configuring Routes


26.1. Route configuration

26.1.1. Creating an HTTP-based route

A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port.

The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You are logged in as an administrator.
  • You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.

Procedure

  1. Create a project called hello-openshift by running the following command:

    $ oc new-project hello-openshift
  2. Create a pod in the project by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
  3. Create a service called hello-openshift by running the following command:

    $ oc expose pod/hello-openshift
  4. Create an unsecured route to the hello-openshift application by running the following command:

    $ oc expose svc hello-openshift

Verification

  • To verify that the route resource was created, run the following command:

    $ oc get routes -o yaml <name of resource> 1
    1
    In this example, the route is named hello-openshift.

Sample YAML definition of the created unsecured route:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
spec:
  host: hello-openshift-hello-openshift.<Ingress_Domain> 1
  port:
    targetPort: 8080 2
  to:
    kind: Service
    name: hello-openshift

1
<Ingress_Domain> is the default ingress domain name. The ingresses.config/cluster object is created during the installation and cannot be changed. If you want to specify a different domain, you can specify an alternative cluster domain using the appsDomain option.
2
targetPort is the target port on pods that is selected by the service that this route points to.
Note

To display your default ingress domain, run the following command:

$ oc get ingresses.config/cluster -o jsonpath={.spec.domain}
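
To confirm that the application responds through the route, you can send a request to the route host. This is a minimal check, assuming that DNS for the default ingress domain resolves to your cluster; the hello-openshift example application returns a short greeting:

$ curl http://hello-openshift-hello-openshift.<Ingress_Domain>

Example output

Hello OpenShift!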

26.1.2. Creating a route for Ingress Controller sharding

A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.

The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You are logged in as a project administrator.
  • You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.
  • You have configured the Ingress Controller for sharding.

Procedure

  1. Create a project called hello-openshift by running the following command:

    $ oc new-project hello-openshift
  2. Create a pod in the project by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
  3. Create a service called hello-openshift by running the following command:

    $ oc expose pod/hello-openshift
  4. Create a route definition called hello-openshift-route.yaml:

    YAML definition of the created route for sharding:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded 1
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift 2
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift

    1
    Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded.
    2
    The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field.
  5. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command:

    $ oc -n hello-openshift create -f hello-openshift-route.yaml

Verification

  • Get the status of the route with the following command:

    $ oc -n hello-openshift get routes/hello-openshift-edge -o yaml

    The resulting Route resource should look similar to the following:

    Example output

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        type: sharded
      name: hello-openshift-edge
      namespace: hello-openshift
    spec:
      subdomain: hello-openshift
      tls:
        termination: edge
      to:
        kind: Service
        name: hello-openshift
    status:
      ingress:
      - host: hello-openshift.<apps-sharded.basedomain.example.net> 1
        routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2
        routerName: sharded 3

    1
    The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net>.
    2
    The hostname of the Ingress Controller.
    3
    The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded.
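
  • Optionally, send a request to the route host to confirm that the sharded Ingress Controller serves it. This is a minimal check, assuming that DNS for the sharded domain resolves to your cluster; the -k option skips verification of a router certificate that your client might not trust:

    $ curl -k https://hello-openshift.<apps-sharded.basedomain.example.net>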

26.1.3. Configuring route timeouts

You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.

Prerequisites

  • You need a deployed Ingress Controller on a running cluster.

Procedure

  1. Using the oc annotate command, add the timeout to the route:

    $ oc annotate route <route_name> \
        --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1
    1
    Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).

    The following example sets a timeout of two seconds on a route named myroute:

    $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
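
Verification

  • To confirm that the annotation was applied, inspect the route. This is a minimal check, assuming the route is named myroute:

    $ oc get route myroute -o yaml | grep haproxy.router.openshift.io/timeout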

26.1.4. HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites.

When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect.

Cluster administrators can configure HSTS to do the following:

  • Enable HSTS per-route
  • Disable HSTS per-route
  • Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains
Important

HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes.

26.1.4.1. Enabling HTTP Strict Transport Security per-route

HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.
  • You installed the oc CLI.

Procedure

  • To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command:

    $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1
    includeSubDomains;preload"
    1
    In this example, the maximum age is set to 31536000 seconds, which is one year.
    Note

    In this example, the equal sign (=) is in quotes. This is required to properly execute the annotate command.

    Example route configured with an annotation

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3
    ...
    spec:
      host: def.abc.com
      tls:
        termination: "reencrypt"
        ...
      wildcardPolicy: "Subdomain"

    1
    Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0, it negates the policy.
    2
    Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host.
    3
    Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header.
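
Verification

  • To confirm that clients receive the Strict-Transport-Security header, you can inspect a response from the route host. This is a minimal check, assuming the example host def.abc.com resolves to your cluster; the -k option skips certificate verification:

    $ curl -skI https://def.abc.com | grep -i strict-transport-security

    Example output

    strict-transport-security: max-age=31536000;includeSubDomains;preload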

26.1.4.2. Disabling HTTP Strict Transport Security per-route

To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.
  • You installed the oc CLI.

Procedure

  • To disable HSTS, set the max-age value in the route annotation to 0, by entering the following command:

    $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"
    Tip

    You can alternatively apply the following YAML to the route to disable HSTS:

    Example of disabling HSTS per-route

    metadata:
      annotations:
        haproxy.router.openshift.io/hsts_header: max-age=0

  • To disable HSTS for every route in a namespace, enter the following command:

    $ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"

Verification

  1. To query the annotation for all routes, enter the following command:

    $ oc get route  --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

    Example output

    Name: routename HSTS: max-age=0

26.1.4.3. Enforcing HTTP Strict Transport Security per-domain

To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy.

If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation.

Note

To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates.

Note

You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations.

Important

HSTS cannot be applied to insecure or non-TLS routes, even if HSTS is requested for all routes globally.

Prerequisites

  • You are logged in to the cluster with a user with administrator privileges for the project.
  • You installed the oc CLI.

Procedure

  1. Edit the Ingress config file:

    $ oc edit ingresses.config.openshift.io/cluster

    Example HSTS policy

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      domain: 'hello-openshift-default.apps.username.devcluster.openshift.com'
      requiredHSTSPolicies: 1
      - domainPatterns: 2
        - '*hello-openshift-default.apps.username.devcluster.openshift.com'
        - '*hello-openshift-default2.apps.username.devcluster.openshift.com'
        namespaceSelector: 3
          matchLabels:
            myPolicy: strict
        maxAge: 4
          smallestMaxAge: 1
          largestMaxAge: 31536000
        preloadPolicy: RequirePreload 5
        includeSubDomainsPolicy: RequireIncludeSubDomains 6
      - domainPatterns: 7
        - 'abc.example.com'
        - '*xyz.example.com'
        namespaceSelector:
          matchLabels: {}
        maxAge: {}
        preloadPolicy: NoOpinion
        includeSubDomainsPolicy: RequireNoIncludeSubDomains

    1
    Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies.
    2 7
    Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns.
    3
    Optional. If you include namespaceSelector, it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated.
    4
    Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced.
    • The largestMaxAge value must be between 0 and 2147483647. It can be left unspecified, which means no upper limit is enforced.
    • The smallestMaxAge value must be between 0 and 2147483647. Set it to 0 to allow HSTS to be disabled for troubleshooting, or to 1 if HSTS must never be disabled. It can be left unspecified, which means no lower limit is enforced.
    5
    Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following:
    • RequirePreload: preload is required by the RequiredHSTSPolicy.
    • RequireNoPreload: preload is forbidden by the RequiredHSTSPolicy.
    • NoOpinion: preload does not matter to the RequiredHSTSPolicy.
    6
    Optional. includeSubDomainsPolicy can be set with one of the following:
    • RequireIncludeSubDomains: includeSubDomains is required by the RequiredHSTSPolicy.
    • RequireNoIncludeSubDomains: includeSubDomains is forbidden by the RequiredHSTSPolicy.
    • NoOpinion: includeSubDomains does not matter to the RequiredHSTSPolicy.
  2. You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command.

    • To apply HSTS to all routes in the cluster, enter the oc annotate command. For example:

      $ oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"
    • To apply HSTS to all routes in a particular namespace, enter the oc annotate command. For example:

      $ oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000"

Verification

You can review the HSTS policy you configured. For example:

  • To review the maxAge set for required HSTS policies, enter the following command:

    $ oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}'
  • To review the HSTS annotations on all routes, enter the following command:

    $ oc get route  --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

    Example output

    Name: <routename> HSTS: max-age=31536000;preload;includeSubDomains

26.1.5. Throughput issue troubleshooting methods

Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services.

If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues:

  • Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.

    For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.

    $ tcpdump -s 0 -i any -w /tmp/dump.pcap 'host <podip 1> and host <podip 2>' 1
    1
    podip is the IP address for the pod. Run the oc get pod <pod_name> -o wide command to get the IP address of a pod.

    The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:

    $ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789
  • Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes; a minimal iperf3 sketch follows this list.

  • In some cases, the cluster may mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action.
  • If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod.
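
The following is a minimal sketch of measuring throughput between two pods with iperf3, assuming that the container images include iperf3 and that the pod names and server pod IP address are placeholders you replace with your own:

$ oc rsh <server_pod> iperf3 -s 1
$ oc rsh <client_pod> iperf3 -c <server_pod_ip> -t 30 2
$ oc rsh <client_pod> iperf3 -u -c <server_pod_ip> -b 1G 3

1
Starts an iperf3 server in the first pod.
2
Measures TCP streaming throughput from the second pod for 30 seconds.
3
Repeats the measurement with UDP traffic at a target rate of 1 Gbit/s.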

26.1.6. Using cookies to keep route statefulness

OpenShift Container Platform provides sticky sessions, which enable stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear.

OpenShift Container Platform can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod.

Note

Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend.

If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod.
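
For routes where cookies can be used, you can control the cookie with route annotations. For example, the following command is a minimal sketch that sets a custom cookie name on a hypothetical route named my-route:

$ oc annotate route my-route router.openshift.io/cookie_name="my_cookie"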

26.1.7. Path-based routes

Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least.

The following table shows example routes and their accessibility:

Table 26.1. Route availability

| Route                                    | When compared to     | Accessible                               |
| ---------------------------------------- | -------------------- | ---------------------------------------- |
| www.example.com/test                     | www.example.com/test | Yes                                      |
| www.example.com/test                     | www.example.com      | No                                       |
| www.example.com/test and www.example.com | www.example.com/test | Yes                                      |
| www.example.com/test and www.example.com | www.example.com      | Yes                                      |
| www.example.com                          | www.example.com/text | Yes (Matched by the host, not the route) |
| www.example.com                          | www.example.com      | Yes                                      |

An unsecured route with a path

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-unsecured
spec:
  host: www.example.com
  path: "/test" 1
  to:
    kind: Service
    name: service-name

1
The path is the only added attribute for a path-based route.
Note

Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request.
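
You can also create a path-based route directly from the command line. The following command is a minimal sketch, assuming a service named service-name exists in the current project:

$ oc expose service service-name --hostname=www.example.com --path="/test" --name=route-unsecured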

26.1.8. HTTP header configuration

OpenShift Container Platform provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together.

Note

You can only set or delete headers within an IngressController or Route CR, you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions.

26.1.8.1. Order of precedence

When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header.

  • For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence.
  • For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence.

For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration:

Example IngressController spec

apiVersion: operator.openshift.io/v1
kind: IngressController
# ...
spec:
  httpHeaders:
    actions:
      response:
      - name: X-Frame-Options
        action:
          type: Set
          set:
            value: DENY

A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration:

Example Route spec

apiVersion: route.openshift.io/v1
kind: Route
# ...
spec:
  httpHeaders:
    actions:
      response:
      - name: X-Frame-Options
        action:
          type: Set
          set:
            value: SAMEORIGIN

When both the IngressController spec and Route spec are configuring the X-Frame-Options header, then the value set for this header at the global level in the Ingress Controller will take precedence, even if a specific route allows frames.

This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end:

frontend public
  http-response set-header X-Frame-Options 'DENY'

frontend fe_sni
  http-response set-header X-Frame-Options 'DENY'

frontend fe_no_sni
  http-response set-header X-Frame-Options 'DENY'

backend be_secure:openshift-monitoring:alertmanager-main
  http-response set-header X-Frame-Options 'SAMEORIGIN'

Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations.

26.1.8.2. Special case headers

The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances:

Table 26.2. Special case header configuration options

proxy
  Configurable using IngressController spec: No
  Configurable using Route spec: No
  Reason for disallowment: The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration.
  Configurable using another method: No

host
  Configurable using IngressController spec: No
  Configurable using Route spec: Yes
  Reason for disallowment: When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route.
  Configurable using another method: No

strict-transport-security
  Configurable using IngressController spec: No
  Configurable using Route spec: No
  Reason for disallowment: The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation.
  Configurable using another method: Yes: the haproxy.router.openshift.io/hsts_header route annotation

cookie and set-cookie
  Configurable using IngressController spec: No
  Configurable using Route spec: No
  Reason for disallowment: The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy’s session affinity and restrict HAProxy’s ownership of a cookie.
  Configurable using another method: Yes:
  • the haproxy.router.openshift.io/disable_cookie route annotation
  • the haproxy.router.openshift.io/cookie_name route annotation

26.1.9. Setting or deleting HTTP request and response headers in a route

You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes.

For example, you might want to enable a web application to serve content in alternate locations for specific routes if that content is written in multiple languages, even if there is a default global location specified by the Ingress Controller serving the routes.

The following procedure creates a route that sets the Content-Location HTTP response header so that the URL associated with the application, https://app.example.com, directs to the location https://app.example.com/lang/en-us. Directing application traffic to this location means that anyone using that specific route is accessing web content written in American English.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged into an OpenShift Container Platform cluster as a project administrator.
  • You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.

Procedure

  1. Create a route definition and save it in a file called app-example-route.yaml:

    YAML definition of the created route with HTTP header directives

    apiVersion: route.openshift.io/v1
    kind: Route
    # ...
    spec:
      host: app.example.com
      tls:
        termination: edge
      to:
        kind: Service
        name: app-example
      httpHeaders:
        actions: 1
          response: 2
          - name: Content-Location 3
            action:
              type: Set 4
              set:
                value: /lang/en-us 5

    1
    The list of actions you want to perform on the HTTP headers.
    2
    The type of header you want to change. In this case, a response header.
    3
    The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration.
    4
    The type of action being taken on the header. This field can have the value Set or Delete.
    5
    When setting HTTP headers, you must provide a value. The value can be a string from a list of available directives for that header, for example DENY, or it can be a dynamic value that will be interpreted using HAProxy’s dynamic value syntax. In this case, the value is set to the relative location of the content.
  2. Create a route to your existing web application using the newly created route definition:

    $ oc -n app-example create -f app-example-route.yaml
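
Verification

  • After the route is admitted, you can confirm that the header is added to responses. This is a minimal check, assuming that DNS for app.example.com resolves to the cluster router; add -k if your client does not trust the router certificate:

    $ curl -sI https://app.example.com | grep -i content-location

    Example output

    content-location: /lang/en-us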

For HTTP request headers, the actions specified in the route definitions are executed after any actions performed on HTTP request headers in the Ingress Controller. This means that any values set for those request headers in a route will take precedence over the ones set in the Ingress Controller. For more information on the processing order of HTTP headers, see HTTP header configuration.

26.1.10. Route-specific annotations

The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route.

Important

To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message.

Table 26.3. Route annotations

haproxy.router.openshift.io/balance
  Description: Sets the load-balancing algorithm. Available options are random, source, roundrobin, and leastconn. The default value is source for TLS passthrough routes. For all other routes, the default is random.
  Environment variable used as default: ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM.

haproxy.router.openshift.io/disable_cookies
  Description: Disables the use of cookies to track related connections. If set to 'true' or 'TRUE', the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request.

router.openshift.io/cookie_name
  Description: Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route.

haproxy.router.openshift.io/pod-concurrent-connections
  Description: Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, and each may connect this many times. If not set, or set to 0, there is no limit.

haproxy.router.openshift.io/rate-limit-connections
  Description: Setting 'true' or 'TRUE' enables rate limiting functionality, which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks.

haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp
  Description: Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks.

haproxy.router.openshift.io/rate-limit-connections.rate-http
  Description: Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks.

haproxy.router.openshift.io/rate-limit-connections.rate-tcp
  Description: Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks.

haproxy.router.openshift.io/timeout
  Description: Sets a server-side timeout for the route. (TimeUnits)
  Environment variable used as default: ROUTER_DEFAULT_SERVER_TIMEOUT

haproxy.router.openshift.io/timeout-tunnel
  Description: This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set.
  Environment variable used as default: ROUTER_DEFAULT_TUNNEL_TIMEOUT

ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after
  Description: You can set either an IngressController or the ingress config. This annotation redeploys the router and configures HAProxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop.
  Environment variable used as default: ROUTER_HARD_STOP_AFTER

router.openshift.io/haproxy.health.check.interval
  Description: Sets the interval for the back-end health checks. (TimeUnits)
  Environment variable used as default: ROUTER_BACKEND_CHECK_INTERVAL

haproxy.router.openshift.io/ip_whitelist
  Description: Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [1]

haproxy.router.openshift.io/hsts_header
  Description: Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route.

haproxy.router.openshift.io/rewrite-target
  Description: Sets the rewrite path of the request on the backend.

router.openshift.io/cookie-same-site
  Description: Sets a value to restrict cookies. The values are:
  • Lax: the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified.
  • Strict: the browser sends cookies only for same-site requests.
  • None: the browser sends cookies for both cross-site and same-site requests.
  This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation.

haproxy.router.openshift.io/set-forwarded-headers
  Description: Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are:
  • append: appends the header, preserving any existing header. This is the default value.
  • replace: sets the header, removing any existing header.
  • never: never sets the header, but preserves any existing header.
  • if-none: sets the header if it is not already set.
  Environment variable used as default: ROUTER_SET_FORWARDED_HEADERS

  1. If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config. This file is stored in the /var/lib/haproxy/router/whitelists folder.

    Note

    To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges is listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. This creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist.

Note

Environment variables cannot be edited.

Router timeout variables

TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days).

The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d).

ROUTER_BACKEND_CHECK_INTERVAL
  Default: 5000ms
  Description: Length of time between subsequent liveness checks on back ends.

ROUTER_CLIENT_FIN_TIMEOUT
  Default: 1s
  Description: Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router.

ROUTER_DEFAULT_CLIENT_TIMEOUT
  Default: 30s
  Description: Length of time that a client has to acknowledge or send data.

ROUTER_DEFAULT_CONNECT_TIMEOUT
  Default: 5s
  Description: The maximum connection time.

ROUTER_DEFAULT_SERVER_FIN_TIMEOUT
  Default: 1s
  Description: Controls the TCP FIN timeout from the router to the pod backing the route.

ROUTER_DEFAULT_SERVER_TIMEOUT
  Default: 30s
  Description: Length of time that a server has to acknowledge or send data.

ROUTER_DEFAULT_TUNNEL_TIMEOUT
  Default: 1h
  Description: Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads.

ROUTER_SLOWLORIS_HTTP_KEEPALIVE
  Default: 300s
  Description: Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive. It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay, which is set to 5s. In this case, the overall timeout would be 300s plus 5s.

ROUTER_SLOWLORIS_TIMEOUT
  Default: 10s
  Description: Length of time the transmission of an HTTP request can take.

RELOAD_INTERVAL
  Default: 5s
  Description: Allows the minimum frequency for the router to reload and accept new changes.

ROUTER_METRICS_HAPROXY_TIMEOUT
  Default: 5s
  Description: Timeout for the gathering of HAProxy metrics.

A route setting custom timeout

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms 1
...

1
Specifies the new timeout with HAProxy supported units (us, ms, s, m, h, d). If the unit is not provided, ms is the default.
Note

Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.

A route that allows only one specific IP address

metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10

A route that allows several IP addresses

metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12

A route that allows an IP address CIDR network

metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24

A route that allows both an IP address and IP address CIDR networks

metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8
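
A route that enables basic rate limiting (a minimal sketch; the numeric thresholds are illustrative values only)

metadata:
  annotations:
    haproxy.router.openshift.io/rate-limit-connections: "true"
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp: "10"
    haproxy.router.openshift.io/rate-limit-connections.rate-http: "100"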

A route specifying a rewrite target

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/rewrite-target: / 1
...

1
Sets / as rewrite path of the request on the backend.

Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation.

The following table provides examples of the path rewriting behavior for various combinations of spec.path, request path, and rewrite target.

Table 26.4. rewrite-target examples

| Route.spec.path | Request path | Rewrite target | Forwarded request path                       |
| --------------- | ------------ | -------------- | -------------------------------------------- |
| /foo            | /foo         | /              | /                                            |
| /foo            | /foo/        | /              | /                                            |
| /foo            | /foo/bar     | /              | /bar                                         |
| /foo            | /foo/bar/    | /              | /bar/                                        |
| /foo            | /foo         | /bar           | /bar                                         |
| /foo            | /foo/        | /bar           | /bar/                                        |
| /foo            | /foo/bar     | /baz           | /baz/bar                                     |
| /foo            | /foo/bar/    | /baz           | /baz/bar/                                    |
| /foo/           | /foo         | /              | N/A (request path does not match route path) |
| /foo/           | /foo/        | /              | /                                            |
| /foo/           | /foo/bar     | /              | /bar                                         |

26.1.11. Configuring the route admission policy

Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname.

Warning

Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

Prerequisites

  • Cluster administrator privileges.

Procedure

  • Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

    Sample Ingress Controller configuration

    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed
    ...

    Tip

    You can alternatively apply the following YAML to configure the route admission policy:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed
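
    To revert to the default policy, which disallows hostname claims across namespaces, set namespaceOwnership back to Strict:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"Strict"}}}' --type=merge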

26.1.12. Creating a route through an Ingress object

Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted.

Procedure

  1. Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command:

    YAML Definition of an Ingress

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
      annotations:
        route.openshift.io/termination: "reencrypt" 1
        route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2
    spec:
      rules:
      - host: www.example.com 3
        http:
          paths:
          - backend:
              service:
                name: frontend
                port:
                  number: 443
            path: /
            pathType: Prefix
      tls:
      - hosts:
        - www.example.com
        secretName: example-com-tls-certificate

    1
    The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route, because Ingress has no field for this. The accepted values are edge, passthrough, and reencrypt. All other values are silently ignored. When the annotation value is unset, the default is edge. The TLS certificate details must be defined in the template file to implement the default edge route.

    If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec:

        spec:
          rules:
          - host: www.example.com
            http:
              paths:
              - path: ''
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: frontend
                    port:
                      number: 443
    2
    The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert, that will be inserted into the generated route. To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret.
    3
    When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com, to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname.

    Create the Ingress object by running the following command:

      $ oc apply -f ingress.yaml
  2. List your routes:

    $ oc get routes

    The result includes an autogenerated route whose name starts with frontend-:

    NAME             HOST/PORT         PATH    SERVICES    PORT    TERMINATION          WILDCARD
    frontend-gnztq   www.example.com           frontend    443     reencrypt/Redirect   None

    If you inspect this route, it looks like this:

    YAML Definition of an autogenerated route

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend-gnztq
      ownerReferences:
      - apiVersion: networking.k8s.io/v1
        controller: true
        kind: Ingress
        name: frontend
        uid: 4e6c59cc-704d-4f44-b390-617d879033b6
    spec:
      host: www.example.com
      path: /
      port:
        targetPort: https
      tls:
        certificate: |
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        insecureEdgeTerminationPolicy: Redirect
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          [...]
          -----END RSA PRIVATE KEY-----
        termination: reencrypt
        destinationCACertificate: |
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
      to:
        kind: Service
        name: frontend

26.1.13. Creating a route using the default certificate through an Ingress object

If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows.

Prerequisites

  • You have a service that you want to expose.
  • You have access to the OpenShift CLI (oc).

Procedure

  1. Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml:

    YAML definition of an Ingress object

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
      ...
    spec:
      rules:
        ...
      tls:
      - {} 1

    1
    Use this exact syntax to specify TLS without specifying a custom certificate.
  2. Create the Ingress object by running the following command:

    $ oc create -f example-ingress.yaml

Verification

  • Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command:

    $ oc get routes -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: frontend-j9sdd 1
        ...
      spec:
      ...
        tls: 2
          insecureEdgeTerminationPolicy: Redirect
          termination: edge 3
      ...

    1
    The name of the route includes the name of the Ingress object followed by a random suffix.
    2
    In order to use the default certificate, the route should not specify the spec.tls.certificate field.
    3
    The route should specify the edge termination policy.

26.1.14. Creating a route using the destination CA certificate in the Ingress annotation

The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate.

Prerequisites

  • You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
  • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
  • You must have a separate destination CA certificate in a PEM-encoded file.
  • You must have a service that you want to expose.

Procedure

  1. Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
      annotations:
        route.openshift.io/termination: "reencrypt"
        route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1
    ...
    1
    The annotation references a kubernetes secret.
  2. The secret referenced in this annotation will be inserted into the generated route.

    Example output

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend
      annotations:
        route.openshift.io/termination: reencrypt
        route.openshift.io/destination-ca-certificate-secret: secret-ca-cert
    spec:
    ...
      tls:
        insecureEdgeTerminationPolicy: Redirect
        termination: reencrypt
        destinationCACertificate: |
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
    ...
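
The secret referenced by the annotation must exist in the same namespace as the Ingress object. As a minimal sketch, assuming that the destination CA certificate is in a local PEM-encoded file named ca.crt, you can create an Opaque secret with the certificate under the data.tls.crt key:

$ oc -n <namespace> create secret generic secret-ca-cert --from-file=tls.crt=ca.crt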

26.1.15. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking

If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes.

The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services.

Prerequisites

  • You deployed an OpenShift Container Platform cluster on bare metal.
  • You installed the OpenShift CLI (oc).

Procedure

  1. To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example:

    Sample service YAML file

    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: yyyy-mm-ddT00:00:00Z
      labels:
        name: <service_name>
        manager: kubectl-create
        operation: Update
        time: yyyy-mm-ddT00:00:00Z
      name: <service_name>
      namespace: <namespace_name>
      resourceVersion: "<resource_version_number>"
      selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>"
      uid: <uid_number>
    spec:
      clusterIP: 172.30.0.0/16
      clusterIPs: 1
      - 172.30.0.0/16
      - <second_IP_address>
      ipFamilies: 2
      - IPv4
      - IPv6
      ipFamilyPolicy: RequireDualStack 3
      ports:
      - port: 8080
        protocol: TCP
        targetPort: 8080
      selector:
        name: <namespace_name>
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}

    1
    In a dual-stack instance, there are two different clusterIPs provided.
    2
    For a single-stack instance, enter IPv4 or IPv6. For a dual-stack instance, enter both IPv4 and IPv6.
    3
    For a single-stack instance, enter SingleStack. For a dual-stack instance, enter RequireDualStack.

    These resources generate corresponding endpoints. The Ingress Controller now watches endpointslices.

  2. To view endpoints, enter the following command:

    $ oc get endpoints
  3. To view endpointslices, enter the following command:

    $ oc get endpointslices

26.2. Secured routes

Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates.

Important

If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

26.2.1. Creating a re-encrypt route with a custom certificate

You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command.

Prerequisites

  • You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
  • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
  • You must have a separate destination CA certificate in a PEM-encoded file.
  • You must have a service that you want to expose.
Note

Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

$ openssl rsa -in password_protected_tls.key -out tls.key

Procedure

This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service’s certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, destca.crt, and (optionally) ca.crt. Substitute the name of the Service resource that you want to expose for frontend. Substitute the appropriate hostname for www.example.com.

  • Create a secure Route resource using reencrypt TLS termination and a custom certificate:

    $ oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com

    If you examine the resulting Route resource, it should look similar to the following:

    YAML Definition of the Secure Route

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend
    spec:
      host: www.example.com
      to:
        kind: Service
        name: frontend
      tls:
        termination: reencrypt
        key: |-
          -----BEGIN PRIVATE KEY-----
          [...]
          -----END PRIVATE KEY-----
        certificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        caCertificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        destinationCACertificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----

    See oc create route reencrypt --help for more options.

26.2.2. Creating an edge route with a custom certificate

You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route.

Prerequisites

  • You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
  • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
  • You must have a service that you want to expose.
Note

Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

$ openssl rsa -in password_protected_tls.key -out tls.key

Procedure

This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt, tls.key, and (optionally) ca.crt. Substitute the name of the service that you want to expose for frontend. Substitute the appropriate hostname for www.example.com.

  • Create a secure Route resource using edge TLS termination and a custom certificate.

    $ oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com

    If you examine the resulting Route resource, it should look similar to the following:

    YAML Definition of the Secure Route

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: frontend
    spec:
      host: www.example.com
      to:
        kind: Service
        name: frontend
      tls:
        termination: edge
        key: |-
          -----BEGIN PRIVATE KEY-----
          [...]
          -----END PRIVATE KEY-----
        certificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        caCertificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----

    See oc create route edge --help for more options.

26.2.3. Creating a passthrough route

You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route.

Prerequisites

  • You must have a service that you want to expose.

Procedure

  • Create a Route resource:

    $ oc create route passthrough route-passthrough-secured --service=frontend --port=8080

    If you examine the resulting Route resource, it should look similar to the following:

    A Secured Route Using Passthrough Termination

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: route-passthrough-secured 1
    spec:
      host: www.example.com
      port:
        targetPort: 8080
      tls:
        termination: passthrough 2
        insecureEdgeTerminationPolicy: None 3
      to:
        kind: Service
        name: frontend

    1
    The name of the object, which is limited to 63 characters.
    2
    The termination field is set to passthrough. This is the only required tls field.
    3
    Optional insecureEdgeTerminationPolicy. The only valid values are None, Redirect, or empty for disabled.

    The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication.
