
Chapter 6. Ingress Operator in OpenShift Container Platform


6.1. OpenShift Container Platform Ingress Operator

When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services.

The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define the endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints.
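
For illustration, the following is a minimal sketch of an IngressController that sets an endpoint publishing strategy; the name and domain are assumed values, and the available strategy types are described in section 6.3:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example                          # assumed name for illustration
  namespace: openshift-ingress-operator
spec:
  domain: example.apps.openshiftdemos.com  # assumed domain
  endpointPublishingStrategy:
    type: LoadBalancerService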

6.2. The Ingress configuration asset

The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml.

YAML Definition of the Ingress resource

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.openshiftdemos.com

The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:

  • The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
  • The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host, as illustrated below.
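
A minimal sketch, with assumed names, of a Route that omits spec.host; the generated default host takes the form <route-name>-<namespace>.<ingress-domain>:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend          # assumed route name
  namespace: myproject    # assumed namespace
spec:
  to:
    kind: Service
    name: frontend        # assumed service name
# Because spec.host is omitted, the default host becomes
# frontend-myproject.apps.openshiftdemos.com with the domain shown above.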

6.3. Ingress Controller configuration parameters

The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters.


domain

domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features:

  • For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy.
  • When using a generated default certificate, the certificate is valid for domain and its subdomains. See defaultCertificate.
  • The value is published to individual Route statuses so that users know where to target external DNS records.

The domain value must be unique among all Ingress Controllers and cannot be updated.

If empty, the default value is ingress.config.openshift.io/cluster .spec.domain.

replicas

replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2.

endpointPublishingStrategy

endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.

If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform:

  • AWS: LoadBalancerService (with external scope)
  • Azure: LoadBalancerService (with external scope)
  • GCP: LoadBalancerService (with external scope)
  • Bare metal: NodePortService
  • Other: HostNetwork

For most platforms, the endpointPublishingStrategy value cannot be updated. However, on GCP, you can configure the loadbalancer.providerParameters.gcp.clientAccess subfield.

defaultCertificate

The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used.

The secret must contain the following keys and data:

  • tls.crt: certificate file contents
  • tls.key: key file contents

If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains, and the generated certificate’s CA is automatically integrated with the cluster’s trust store.

The in-use certificate, whether generated or user-specified, is automatically integrated with the OpenShift Container Platform built-in OAuth server.

namespaceSelector

namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards.

routeSelector

routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards; a short sketch follows this entry.
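
A minimal sketch of the selector portion of an IngressController spec, using an assumed label; a complete sharding procedure appears in section 6.8.6:

spec:
  routeSelector:
    matchLabels:
      type: sharded   # assumed label; routes carrying this label are served by this controller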

nodePlacement

nodePlacement enables explicit control over the scheduling of the Ingress Controller.

If not set, the default values are used.

Note

The nodePlacement parameter includes two parts, nodeSelector and tolerations. For example:

nodePlacement:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  tolerations:
  - effect: NoSchedule
    operator: Exists

tlsSecurityProfile

tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers.

If not set, the default value is based on the apiservers.config.openshift.io/cluster resource.

When using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout.

The minimum TLS version for Ingress Controllers is 1.1, and the maximum TLS version is 1.2.

Important

The HAProxy Ingress Controller image does not support TLS 1.3, and because the Modern profile requires TLS 1.3, it is not supported. The Ingress Operator converts the Modern profile to Intermediate.

The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1, and TLS 1.3 of a Custom profile to 1.2.

The OpenShift Container Platform router enables the Red Hat-distributed OpenSSL default set of TLS 1.3 cipher suites: TLS_AES_128_CCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_AES_256_GCM_SHA384, and TLS_AES_128_GCM_SHA256. Your cluster might accept TLS 1.3 connections and cipher suites, even though TLS 1.3 is unsupported in OpenShift Container Platform 4.6, 4.7, and 4.8.

Note

Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status.

routeAdmission

routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces.

namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict.

  • Strict: does not allow routes to claim the same hostname across namespaces.
  • InterNamespaceAllowed: allows routes to claim different paths of the same hostname across namespaces.

wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller.

  • WildcardsAllowed: indicates routes with any wildcard policy are admitted by the Ingress Controller.
  • WildcardsDisallowed: indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting.

IngressControllerLogging

logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled.

  • access describes how client requests are logged. If this field is empty, access logging is disabled.

    • destination describes a destination for log messages.

      • type is the type of destination for logs:

        • Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs, on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity.
        • Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance.
      • container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty.
      • syslog describes parameters for the Syslog logging destination type (a short sketch follows this list):

        • address is the IP address of the syslog endpoint that receives log messages.
        • port is the UDP port number of the syslog endpoint that receives log messages.
        • facility specifies the syslog facility of log messages. If this field is empty, the facility is local1. Otherwise, it must specify a valid syslog facility: kern, user, mail, daemon, auth, syslog, lpr, news, uucp, cron, auth2, ftp, ntp, audit, alert, cron2, local0, local1, local2, local3, local4, local5, local6, or local7.
    • httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation’s default HTTP log format. For HAProxy’s default HTTP log format, see the HAProxy documentation.
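
A minimal sketch of the spec.logging portion with assumed endpoint values; a full procedure appears in section 6.8.4:

logging:
  access:
    destination:
      type: Syslog
      syslog:
        address: 1.2.3.4   # assumed syslog endpoint IP
        port: 10514        # assumed UDP port
        facility: local2   # optional; defaults to local1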

httpHeaders

httpHeaders defines the policy for HTTP headers.

By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders, you specify when and how the Ingress Controller sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers.

By default, the policy is set to Append.

  • Append specifies that the Ingress Controller appends the headers, preserving any existing headers.
  • Replace specifies that the Ingress Controller sets the headers, removing any existing headers.
  • IfNone specifies that the Ingress Controller sets the headers if they are not already set.
  • Never specifies that the Ingress Controller never sets the headers, preserving any existing headers.

By setting headerNameCaseAdjustments, you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization.

These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1.

For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted.

httpCompression

httpCompression defines the policy for HTTP traffic compression; a short sketch follows this entry.

  • mimeTypes defines a list of MIME types to which compression should be applied, for example: text/css; charset=utf-8, text/html, text/*, image/svg+xml, application/octet-stream, and X-custom/customsub, using the format pattern type/subtype[;attribute=value]. The types are: application, image, message, multipart, text, video, or a custom type prefaced by X-. To see the full notation for MIME types and subtypes, see RFC 1341.

httpErrorCodePages

httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image.
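
A minimal sketch of the spec.httpErrorCodePages portion, assuming a config map named my-custom-error-code-pages that you created in the openshift-config namespace to hold the custom error pages:

httpErrorCodePages:
  name: my-custom-error-code-pages   # assumed config map name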

httpCaptureCookies

httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies.

For any cookie that you want to capture, the following parameters must be in your IngressController configuration:

  • name specifies the name of the cookie.
  • maxLength specifies the maximum length of the cookie.
  • matchType specifies if the name field of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters.

For example:

  httpCaptureCookies:
  - matchType: Exact
    maxLength: 128
    name: MYCOOKIE

httpCaptureHeaders

httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers.

httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response. In both lists, the name field must specify the header name and the maxLength field must specify the maximum length of the header. For example:

  httpCaptureHeaders:
    request:
    - maxLength: 256
      name: Connection
    - maxLength: 128
      name: User-Agent
    response:
    - maxLength: 256
      name: Content-Type
    - maxLength: 256
      name: Content-Length

tuningOptions

tuningOptions specifies options for tuning the performance of Ingress Controller pods.

  • headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.
  • headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096. headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller, and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.
  • threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load and can prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the Ingress Controller to perform poorly.
Note

All parameters are optional.

6.3.1. Ingress Controller TLS security profiles

TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server.

6.3.1.1. Understanding TLS security profiles

You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations.

You can specify one of the following TLS security profiles for each component:

Table 6.1. TLS security profiles

Old

This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration.

The Old profile requires a minimum TLS version of 1.0.

Note

For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.

Intermediate

This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration.

The Intermediate profile requires a minimum TLS version of 1.2.

Modern

This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration.

The Modern profile requires a minimum TLS version of 1.3.

Note

In OpenShift Container Platform 4.6, 4.7, and 4.8, the Modern profile is unsupported. If selected, the Intermediate profile is enabled.

Important

The Modern profile is currently not supported.

Custom

This profile allows you to define the TLS version and ciphers to use.

Warning

Use caution when using a Custom profile, because invalid configurations can cause problems.

Note

The OpenShift Container Platform router enables the Red Hat-distributed OpenSSL default set of TLS 1.3 cipher suites. Your cluster might accept TLS 1.3 connections and cipher suites, even though TLS 1.3 is unsupported in OpenShift Container Platform 4.6, 4.7, and 4.8.

Note

When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout.

To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.

Sample IngressController CR that configures the Old TLS security profile

apiVersion: operator.openshift.io/v1
kind: IngressController
 ...
spec:
  tlsSecurityProfile:
    old: {}
    type: Old
 ...

The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers.

You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile. For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters.

Important

The HAProxy Ingress Controller image does not support TLS 1.3, and because the Modern profile requires TLS 1.3, it is not supported. The Ingress Operator converts the Modern profile to Intermediate.

The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1, and TLS 1.3 of a Custom profile to 1.2.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile:

    $ oc edit IngressController default -n openshift-ingress-operator
  2. Add the spec.tlsSecurityProfile field:

    Sample IngressController CR for a Custom profile

    apiVersion: operator.openshift.io/v1
    kind: IngressController
     ...
    spec:
      tlsSecurityProfile:
        type: Custom 1
        custom: 2
          ciphers: 3
          - ECDHE-ECDSA-CHACHA20-POLY1305
          - ECDHE-RSA-CHACHA20-POLY1305
          - ECDHE-RSA-AES128-GCM-SHA256
          - ECDHE-ECDSA-AES128-GCM-SHA256
          minTLSVersion: VersionTLS11
     ...

    1
    Specify the TLS security profile type (Old, Intermediate, or Custom). The default is Intermediate.
    2
    Specify the appropriate field for the selected type:
    • old: {}
    • intermediate: {}
    • custom:
    3
    For the custom type, specify a list of TLS ciphers and minimum accepted TLS version.
  3. Save the file to apply the changes.

Verification

  • Verify that the profile is set in the IngressController CR:

    $ oc describe IngressController default -n openshift-ingress-operator

    Example output

    Name:         default
    Namespace:    openshift-ingress-operator
    Labels:       <none>
    Annotations:  <none>
    API Version:  operator.openshift.io/v1
    Kind:         IngressController
     ...
    Spec:
     ...
      Tls Security Profile:
        Custom:
          Ciphers:
            ECDHE-ECDSA-CHACHA20-POLY1305
            ECDHE-RSA-CHACHA20-POLY1305
            ECDHE-RSA-AES128-GCM-SHA256
            ECDHE-ECDSA-AES128-GCM-SHA256
          Min TLS Version:  VersionTLS11
        Type:               Custom
     ...

6.3.2. Ingress Controller endpoint publishing strategy

NodePortService endpoint publishing strategy

The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.

In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved.

Figure 6.1. Diagram of NodePortService


The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy:

  • All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes.
  • When the client connects to a node that is down, for example, by connecting to the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. As the image shows, the 10.0.128.4 address is down and another IP address must be used instead.
Note

The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service.

By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.

For more information, see the Kubernetes Services documentation on NodePort.
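
A minimal sketch of an IngressController that uses this strategy; the name and domain are assumed values:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: nodeport-example                 # assumed name
  namespace: openshift-ingress-operator
spec:
  domain: example.apps.openshiftdemos.com  # assumed domain
  endpointPublishingStrategy:
    type: NodePortService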

HostNetwork endpoint publishing strategy

The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.

An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
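
A minimal sketch for the HostNetwork strategy, again with assumed metadata; remember that the domain must be unique per Ingress Controller:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: hostnetwork-example              # assumed name
  namespace: openshift-ingress-operator
spec:
  domain: internal.apps.openshiftdemos.com  # assumed domain
  endpointPublishingStrategy:
    type: HostNetwork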

6.4. View the default Ingress Controller

The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box.

Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

Procedure

  • View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

6.5. View Ingress Operator status

You can view and inspect the status of your Ingress Operator.

Procedure

  • View your Ingress Operator status:

    $ oc describe clusteroperators/ingress

6.6. View Ingress Controller logs

You can view your Ingress Controller logs.

Procedure

  • View your Ingress Controller logs:

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator

6.7. View Ingress Controller status

You can view the status of a particular Ingress Controller.

Procedure

  • View the status of an Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

6.8. Configuring the Ingress Controller

6.8.1. Setting a custom default certificate

As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR).

Prerequisites

  • You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.
  • Your certificate meets the following requirements:

    • The certificate is valid for the ingress domain.
    • The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com.
  • You must have an IngressController CR. You may use the default one:

    $ oc --namespace openshift-ingress-operator get ingresscontrollers

    Example output

    NAME      AGE
    default   10m

Note

If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).

Procedure

The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key. You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR.

Note

This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.

  1. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files.

    $ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
  2. Update the IngressController CR to reference the new certificate secret:

    $ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
      --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
  3. Verify the update was effective:

    $ echo Q |\
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>
    Specifies the base domain name for your cluster.

    Example output

    subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com
    issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com
    notAfter=May 10 08:32:45 2022 GMT

    Tip

    You can alternatively apply the following YAML to set a custom default certificate:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      defaultCertificate:
        name: custom-certs-default

    The certificate secret name should match the value used to update the CR.

Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.

6.8.2. Removing a custom default certificate

As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).
  • You previously configured a custom default certificate for the Ingress Controller.

Procedure

  • To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command:

    $ oc patch -n openshift-ingress-operator ingresscontrollers/default \
      --type json -p $'- op: remove\n  path: /spec/defaultCertificate'

    There can be a delay while the cluster reconciles the new certificate configuration.

Verification

  • To confirm that the original cluster certificate is restored, enter the following command:

    $ echo Q | \
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>
    Specifies the base domain name for your cluster.

    Example output

    subject=CN = *.apps.<domain>
    issuer=CN = ingress-operator@1620633373
    notAfter=May 10 10:44:36 2023 GMT

6.8.3. Scaling an Ingress Controller

Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController.

Note

Scaling is not an immediate action, as it takes time to create the desired number of replicas.

Procedure

  1. View the current number of available replicas for the default IngressController:

    $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

    Example output

    2

  2. Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas:

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge

    Example output

    ingresscontroller.operator.openshift.io/default patched

  3. Verify that the default IngressController scaled to the number of replicas that you specified:

    $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

    Example output

    3

    Tip

    You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 3 1

    1
    If you need a different number of replicas, change the replicas value.

6.8.4. Configuring Ingress access logging

You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs.

Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller.

Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack’s capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap.

Prerequisites

  • Log in as a user with cluster-admin privileges.

Procedure

Configure Ingress access logging to a sidecar.

  • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type. The following example is an Ingress Controller definition that logs to a Container destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Container
  • When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod:

    $ oc -n openshift-ingress logs deployment.apps/router-default -c logs

    Example output

    2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"

Configure Ingress access logging to a Syslog endpoint.

  • To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type. If the destination type is Syslog, you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility. The following example is an Ingress Controller definition that logs to a Syslog destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514
    Note

    The syslog destination port must be UDP.

Configure Ingress access logging with a specific log format.

  • You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514
          httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'

Disable Ingress access logging.

  • To disable Ingress access logging, leave spec.logging or spec.logging.access empty:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access: null

6.8.5. Setting Ingress Controller thread count

A cluster administrator can set the thread count to increase the number of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the number of threads.

Prerequisites

  • The following assumes that you already created an Ingress Controller.

Procedure

  • Update the Ingress Controller to increase the number of threads:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}'
    Note

    If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value, as in the sketch below.
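
    A minimal sketch of that combination, using an assumed node label and thread count:

    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            ingress-capacity: high   # assumed label applied to high-capacity nodes
      tuningOptions:
        threadCount: 8               # assumed value; tune to the node capacity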

6.8.6. Ingress Controller sharding

As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller, or router, can be significant. As a cluster administrator, you can shard the routes to:

  • Balance Ingress Controllers, or routers, with several routes to speed up responses to changes.
  • Allocate certain routes to have different reliability guarantees than other routes.
  • Allow certain Ingress Controllers to have different policies defined.
  • Allow only specific routes to use additional features.
  • Expose different routes on different addresses so that internal and external users can see different routes, for example.

An Ingress Controller can use either route labels or namespace labels as a sharding method.

Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Procedure

  1. Edit the router-internal.yaml file:

    # cat router-internal.yaml
    apiVersion: v1
    items:
    - apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: sharded
        namespace: openshift-ingress-operator
      spec:
        domain: <apps-sharded.basedomain.example.net>
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
        routeSelector:
          matchLabels:
            type: sharded
      status: {}
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
  2. Apply the Ingress Controller router-internal.yaml file:

    # oc apply -f router-internal.yaml

    The Ingress Controller selects routes in any namespace that have the label type: sharded.

Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.

Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.

Warning

If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with value HostNetwork for the endpointPublishingStrategy parameter. Doing so might cause issues. Use value NodePort instead of HostNetwork for endpointPublishingStrategy.

Procedure

  1. Edit the router-internal.yaml file:

    # cat router-internal.yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: sharded
        namespace: openshift-ingress-operator
      spec:
        domain: <apps-sharded.basedomain.example.net>
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
        namespaceSelector:
          matchLabels:
            type: sharded
      status: {}
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""

  2. Apply the Ingress Controller router-internal.yaml file:

    # oc apply -f router-internal.yaml

    The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded.

6.8.7. Configuring an Ingress Controller to use an internal load balancer

When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.

Warning

If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

Important

If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

Figure 6.2. Diagram of LoadBalancer


The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy:

  • You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer.
  • You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic.
  • Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml, such as in the following example:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      namespace: openshift-ingress-operator
      name: <name> 1
    spec:
      domain: <domain> 2
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal 3
    1
    Replace <name> with a name for the IngressController object.
    2
    Specify the domain for the application published by the controller.
    3
    Specify a value of Internal to use an internal load balancer.
  2. Create the Ingress Controller defined in the previous step by running the following command:

    $ oc create -f <name>-ingress-controller.yaml 1
    1
    Replace <name> with the name of the IngressController object.
  3. Optional: Confirm that the Ingress Controller was created by running the following command:

    $ oc --all-namespaces=true get ingresscontrollers

6.8.8. Configuring global access for an Ingress Controller on GCP

An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer to reach the workloads running on your cluster.

For more information, see the GCP documentation for global access.

Prerequisites

  • You deployed an OpenShift Container Platform cluster on GCP infrastructure.
  • You configured an Ingress Controller to use an internal load balancer.
  • You installed the OpenShift CLI (oc).

Procedure

  1. Configure the Ingress Controller resource to allow global access.

    Note

    You can also create an Ingress Controller and specify the global access option.

    1. Configure the Ingress Controller resource:

      $ oc -n openshift-ingress-operator edit ingresscontroller/default
    2. Edit the YAML file:

      Sample clientAccess configuration to Global

        spec:
          endpointPublishingStrategy:
            loadBalancer:
              providerParameters:
                gcp:
                  clientAccess: Global 1
                type: GCP
              scope: Internal
            type: LoadBalancerService

      1
      Set gcp.clientAccess to Global.
    3. Save the file to apply the changes.
  2. Run the following command to verify that the service allows global access:

    $ oc -n openshift-ingress edit svc/router-default -o yaml

    The output shows that global access is enabled for GCP with the annotation networking.gke.io/internal-load-balancer-allow-global-access.

6.8.9. Configuring the default Ingress Controller for your cluster to be internal

You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

Warning

If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.

Important

If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

    $ oc replace --force --wait --filename - <<EOF
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      namespace: openshift-ingress-operator
      name: default
    spec:
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal
    EOF

6.8.10. Configuring the route admission policy

Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname.

Warning

Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

Prerequisites

  • Cluster administrator privileges.

Procedure

  • Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

    Sample Ingress Controller configuration

    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed
    ...

    Tip

    You can alternatively apply the following YAML to configure the route admission policy:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      routeAdmission:
        namespaceOwnership: InterNamespaceAllowed

6.8.11. Using wildcard routes

The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller.

The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None, which is backwards compatible with existing IngressController resources.

Procedure

  1. Configure the wildcard policy.

    1. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec, set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed:

      spec:
        routeAdmission:
          wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed
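
      With WildcardsAllowed set, a route can then request a subdomain wildcard. A minimal sketch of such a route, using an assumed host and service name:

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: wildcard-example    # assumed name
      spec:
        host: app.example.com     # assumed host; the Subdomain policy serves *.example.com
        wildcardPolicy: Subdomain
        to:
          kind: Service
          name: frontend          # assumed service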

6.8.12. Using X-Forwarded headers

You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers, including Forwarded and X-Forwarded-For. The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller.

Procedure

  1. Configure the HTTPHeaders field for the Ingress Controller.

    1. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec, set the HTTPHeaders policy field to Append, Replace, IfNone, or Never:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          forwardedHeaderPolicy: Append
Example use cases

As a cluster administrator, you can:

  • Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller.

    To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides.

  • Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified.

    To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header.

As an application developer, you can:

  • Configure an application-specific external proxy that injects the X-Forwarded-For header.

    To configure an Ingress Controller to pass the header through unmodified for an application’s Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application.

    Note

    You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller.

6.8.13. Enabling HTTP/2 Ingress connectivity

You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.

You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.

To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.

The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.

Warning

Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OpenShift Container Platform at this time.

Important

For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.

Procedure

Enable HTTP/2 on a single Ingress Controller.

  • To enable HTTP/2 on an Ingress Controller, enter the oc annotate command:

    $ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

    Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate.

Enable HTTP/2 on the entire cluster.

  • To enable HTTP/2 for the entire cluster, enter the oc annotate command:

    $ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
    Tip

    You can alternatively apply the following YAML to add the annotation:

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
      annotations:
        ingress.operator.openshift.io/default-enable-http2: "true"

6.8.14. Configuring the PROXY protocol for an Ingress Controller

A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork or NodePortService endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer.

This feature is not supported in cloud deployments. This restriction is because when OpenShift Container Platform runs in a cloud platform, and an IngressController specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses.

Important

You must configure both OpenShift Container Platform and the external load balancer to either use the PROXY protocol or to use TCP.

Warning

The PROXY protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP.

Prerequisites

  • You created an Ingress Controller.

Procedure

  1. Edit the Ingress Controller resource:

    $ oc -n openshift-ingress-operator edit ingresscontroller/default
  2. Set the PROXY configuration:

    • If your Ingress Controller uses the hostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY:

      Sample hostNetwork configuration to PROXY

        spec:
          endpointPublishingStrategy:
            hostNetwork:
              protocol: PROXY
            type: HostNetwork

    • If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY:

      Sample nodePort configuration to PROXY

        spec:
          endpointPublishingStrategy:
            nodePort:
              protocol: PROXY
            type: NodePortService

6.8.15. Specifying an alternative cluster domain using the appsDomain field

As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route.

For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc command line interface.

Procedure

  1. Configure the appsDomain field by specifying an alternative default domain for user-created routes.

    1. Edit the ingress cluster resource:

      $ oc edit ingresses.config/cluster -o yaml
    2. Edit the YAML file:

      Sample appsDomain configuration to test.example.com

      apiVersion: config.openshift.io/v1
      kind: Ingress
      metadata:
        name: cluster
      spec:
        domain: apps.example.com       1
        appsDomain: <test.example.com>  2

      1
      Specifies the default domain. You cannot modify the default domain after installation.
      2
      Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps, you can use an alternative prefix like test.
  2. Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change:

    Note

    Wait for the openshift-apiserver to finish rolling updates before exposing the route.

    1. Expose the route:

      $ oc expose service hello-openshift
      route.route.openshift.io/hello-openshift exposed

      Example output:

      $ oc get routes
      NAME              HOST/PORT                                       PATH   SERVICES          PORT       TERMINATION   WILDCARD
      hello-openshift   hello_openshift-<my_project>.test.example.com          hello-openshift   8080-tcp                 None

6.8.16. Converting HTTP header case

HAProxy 2.2 lowercases HTTP header names by default, for example, changing Host: xyz.com to host: xyz.com. If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed.

Important

Because OpenShift Container Platform 4.8 includes HAProxy 2.2, make sure to add the necessary configuration by using spec.httpHeaders.headerNameCaseAdjustments before upgrading.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.

Procedure

As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file.

  • Specify an HTTP header to be capitalized by entering the oc patch command.

    1. Enter the oc patch command to change the HTTP host header to Host:

      $ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}'
    2. Annotate the route of the application:

      $ oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true

      The Ingress Controller then adjusts the host request header as specified.

  • Specify adjustments using the HeaderNameCaseAdjustments field by configuring the Ingress Controller YAML file.

    1. The following example Ingress Controller YAML adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes:

      Example Ingress Controller YAML

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          headerNameCaseAdjustments:
          - Host

    2. The following example route enables HTTP response header name case adjustments using the haproxy.router.openshift.io/h1-adjust-case annotation:

      Example route YAML

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/h1-adjust-case: true 1
        name: my-application
        namespace: my-application
      spec:
        to:
          kind: Service
          name: my-application

      1
      Set haproxy.router.openshift.io/h1-adjust-case to true.