Chapter 2. Networking Operators
2.1. AWS Load Balancer Operator
The AWS Load Balancer Operator is an Operator supported by Red Hat that users can optionally install on SRE-managed Red Hat OpenShift Service on AWS (ROSA) clusters. The AWS Load Balancer Operator manages the lifecycle of the AWS Load Balancer Controller that provisions AWS Elastic Load Balancing v2 (ELBv2) services for applications running in ROSA clusters.
Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Red Hat OpenShift Service on AWS (ROSA) cluster. The controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when implementing Kubernetes Service resources with a type of LoadBalancer.
Compared with the default AWS in-tree load balancer provider, this controller provides advanced annotations for both ALBs and NLBs. Some advanced use cases are:
- Use native Kubernetes Ingress objects with ALBs
- Integrate ALBs with the AWS Web Application Firewall (WAF) service
- Specify custom NLB source IP ranges
- Specify custom NLB internal IP addresses
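For illustration only, the following minimal sketch shows how two of these annotations might look. The annotation keys come from the upstream AWS Load Balancer Controller documentation; the resource names, CIDR ranges, and web ACL ARN are placeholder values and are not part of this procedure:
apiVersion: v1
kind: Service
metadata:
  name: example-nlb
  annotations:
    # Placeholder CIDRs: restrict which client source IP ranges may reach the NLB
    service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24,192.168.10.0/24
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Placeholder ARN: attach an AWS WAFv2 web ACL to the provisioned ALB
    alb.ingress.kubernetes.io/wafv2-acl-arn: <wafv2_web_acl_arn>
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80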
The AWS Load Balancer Operator is used to install, manage, and configure an instance of aws-load-balancer-controller in a ROSA cluster.
2.1.1. Setting up your environment to install the AWS Load Balancer Operator
The AWS Load Balancer Operator requires a cluster with multiple availability zones (AZ), as well as three public subnets split across three AZs in the same virtual private cloud (VPC) as the cluster.
Because of these requirements, the AWS Load Balancer Operator may be unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.
Before installing the AWS Load Balancer Operator, you must have configured the following:
- A ROSA (classic architecture) cluster with multiple availability zones
- A BYO VPC cluster
- The AWS CLI
- The OpenShift CLI (oc)
2.1.1.1. AWS Load Balancer Operator environment set up
Optional: You can set up temporary environment variables to streamline your installation commands.
If you decide not to use environment variables, manually enter the values where prompted in the code snippets.
Procedure
After logging in to your cluster as an admin user, run the following commands:
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
$ mkdir -p ${SCRATCH}
You can verify that the variables are set by running the following command:
$ echo "Cluster name: ${CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
Example output
Cluster name: <cluster_id>, Region: us-east-2, OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id>, AWS Account ID: <aws_id>
2.1.1.2. AWS VPC and subnets
Before you can install the AWS Load Balancer Operator, you must tag your AWS VPC resources.
Procedure
Set the environment variables to the proper values for your ROSA deployment:
$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
$ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
Add a tag to your cluster’s VPC with the cluster name:
$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
Add a tag to your public subnets:
$ aws ec2 create-tags \
  --resources ${PUBLIC_SUBNET_IDS} \
  --tags Key=kubernetes.io/role/elb,Value='' \
  --region ${REGION}
Add a tag to your private subnets:
$ aws ec2 create-tags \
  --resources ${PRIVATE_SUBNET_IDS} \
  --tags Key=kubernetes.io/role/internal-elb,Value='' \
  --region ${REGION}
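Optionally, you can confirm that the tags were applied. This verification step is not part of the original procedure; it uses the standard aws ec2 describe-tags command against the VPC, and the subnets can be checked the same way:
$ aws ec2 describe-tags --filters "Name=resource-id,Values=${VPC_ID}" --region ${REGION}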
Additional resources
- To set up a ROSA classic cluster with multiple availability zones, see Creating a ROSA cluster with STS using the default options.
2.1.2. Installing the AWS Load Balancer Operator
After setting up your environment with your cluster, you can install the AWS Load Balancer Operator using the CLI.
Procedure
Create a new project within your cluster for the AWS Load Balancer Operator:
$ oc new-project aws-load-balancer-operator
Create an AWS IAM policy for the AWS Load Balancer Controller:
Note: You can find the AWS IAM policy in the upstream AWS Load Balancer Controller policy. This policy includes all of the permissions needed by the Operator to function.
$ POLICY_ARN=$(aws iam list-policies --query \
  "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
  --output text)
$ if [[ -z "${POLICY_ARN}" ]]; then
  wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
    https://raw.githubusercontent.com/rh-mobb/documentation/main/content/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json
  POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
    --output text iam create-policy \
    --policy-name aws-load-balancer-operator-policy \
    --policy-document "file://${SCRATCH}/load-balancer-operator-policy.json")
fi
$ echo $POLICY_ARN
Create an AWS IAM trust policy for the AWS Load Balancer Operator:
$ cat <<EOF > "${SCRATCH}/trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": [
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager",
            "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
          ]
        }
      },
      "Principal": {
        "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
EOF
Create an AWS IAM role for the AWS Load Balancer Operator:
$ ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
  --assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
  --query Role.Arn --output text)
$ echo $ROLE_ARN
$ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
  --policy-arn $POLICY_ARN
Create a secret for the AWS Load Balancer Operator to assume the newly created AWS IAM role:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
stringData:
  credentials: |
    [default]
    role_arn = $ROLE_ARN
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
Install the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1.0
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: aws-load-balancer-operator.v1.0.0
EOF
Deploy an instance of the AWS Load Balancer Controller using the Operator:
Note: If you get an error here, the Operator has not finished installing yet. Wait a minute and try again.
$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  credentials:
    name: aws-load-balancer-operator
EOF
Check that the Operator and controller pods are both running:
$ oc -n aws-load-balancer-operator get pods
You should see the following output. If you do not, wait a moment and retry:
NAME                                                             READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
2.1.2.1. Validating the deployment
Create a new project:
$ oc new-project hello-world
Deploy a hello world application:
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Configure a NodePort service for the AWS ALB to connect to:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nodeport
  namespace: hello-world
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: NodePort
  selector:
    deployment: hello-openshift
EOF
Deploy an AWS ALB using the AWS Load Balancer Operator:
$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-alb
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: hello-openshift-nodeport
            port:
              number: 80
EOF
Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
Note: AWS ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
Example output
Hello OpenShift!
Deploy an AWS NLB for your hello world application:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nlb
  namespace: hello-world
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  selector:
    deployment: hello-openshift
EOF
Test the AWS NLB endpoint:
Note: NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.
$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
Example output
Hello OpenShift!
2.1.3. Deleting the example AWS Load Balancer Operator installation
Delete the hello world application namespace (and all the resources in the namespace):
$ oc delete project hello-world
Delete the AWS Load Balancer Operator and the AWS IAM roles:
$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
$ aws iam detach-role-policy \
  --role-name "${CLUSTER_NAME}-alb-operator" \
  --policy-arn $POLICY_ARN
$ aws iam delete-role \
  --role-name "${CLUSTER_NAME}-alb-operator"
Delete the AWS IAM policy:
$ aws iam delete-policy --policy-arn $POLICY_ARN
2.2. DNS Operator in Red Hat OpenShift Service on AWS
In Red Hat OpenShift Service on AWS, the DNS Operator deploys and manages a CoreDNS instance to provide a name resolution service to pods inside the cluster, enables DNS-based Kubernetes Service discovery, and resolves internal cluster.local names.
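For example, you can inspect the DNS Operator configuration and the cluster DNS service that it manages with standard oc commands; dns-default is the service that CoreDNS serves from in the openshift-dns namespace:
$ oc describe dns.operator/default
$ oc get service/dns-default -n openshift-dns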
2.2.1. Using DNS forwarding
You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:
- Specify name servers (spec.servers) for every zone. If the forwarded zone is the ingress domain managed by Red Hat OpenShift Service on AWS, then the upstream name server must be authorized for the domain.
  Important: You must specify at least one zone. Otherwise, your cluster can lose functionality.
- Provide a list of upstream DNS servers (spec.upstreamResolvers).
- Change the default forwarding policy.
A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.
Procedure
Modify the DNS Operator object named default:
$ oc edit dns.operator/default
After you issue the previous command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on spec.servers.
Important: When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality.
If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers.
Configuring DNS forwarding
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  cache:
    negativeTTL: 0s
    positiveTTL: 0s
  logLevel: Normal
  nodePlacement: {}
  operatorLogLevel: Normal
  servers:
  - name: example-server 1
    zones:
    - example.com 2
    forwardPlugin:
      policy: Random 3
      upstreams: 4
      - 1.1.1.1
      - 2.2.2.2:5353
  upstreamResolvers: 5
    policy: Random 6
    protocolStrategy: "" 7
    transportConfig: {} 8
    upstreams:
    - type: SystemResolvConf 9
    - type: Network
      address: 1.2.3.4 10
      port: 53 11
status:
  clusterDomain: cluster.local
  clusterIP: x.y.z.10
  conditions:
  ...
1. Must comply with the rfc6335 service name syntax.
2. Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local, is an invalid subdomain for the zones field.
3. Defines the policy to select upstream resolvers listed in the forwardPlugin. The default value is Random. You can also use the values RoundRobin and Sequential.
4. A maximum of 15 upstreams is allowed per forwardPlugin.
5. You can use upstreamResolvers to override the default forwarding policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers declared in /etc/resolv.conf.
6. Determines the order in which upstream servers listed in upstreams are selected for querying. You can specify one of these values: Random, RoundRobin, or Sequential. The default value is Sequential.
7. When omitted, the platform chooses a default, normally the protocol of the original client request. Set to TCP to specify that the platform should use TCP for all upstream DNS requests, even if the client request uses UDP.
8. Used to configure the transport type, server name, and optional custom CA or CA bundle to use when forwarding DNS requests to an upstream resolver.
9. You can specify two types of upstreams: SystemResolvConf or Network. SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Network resolver. You can specify one or both.
10. If the specified type is Network, you must provide an IP address. The address field must be a valid IPv4 or IPv6 address.
11. If the specified type is Network, you can optionally provide a port. The port field must have a value between 1 and 65535. If you do not specify a port for the upstream, the default port is 853.
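To see the server configuration blocks that the Operator generates from spec.servers, you can view the resulting config map; this is a standard read-only check:
$ oc get configmap/dns-default -n openshift-dns -o yaml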
Additional resources
- For more information on DNS forwarding, see the CoreDNS forward documentation.
2.3. Ingress Operator in Red Hat OpenShift Service on AWS
The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to Red Hat OpenShift Service on AWS cluster services.
2.3.1. Red Hat OpenShift Service on AWS Ingress Operator
When you create your Red Hat OpenShift Service on AWS cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients.
The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. Red Hat Site Reliability Engineers (SRE) manage the Ingress Operator for Red Hat OpenShift Service on AWS clusters. While you cannot alter the settings for the Ingress Operator, you may view the default Ingress Controller configurations, status, and logs as well as the Ingress Operator status.
2.3.2. The Ingress configuration asset
The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml.
YAML definition of the Ingress resource
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
name: cluster
spec:
domain: apps.openshiftdemos.com
The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:
- The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
- The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host.
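For example, you can view the cluster-wide Ingress configuration, including the domain value described above, with a standard oc command:
$ oc get ingresses.config/cluster -o yaml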
2.3.3. Ingress Controller configuration parameters
The IngressController custom resource (CR) includes optional configuration parameters that you can configure to meet specific needs for your organization.
Parameter | Description
---|---
domain | domain is a DNS name serviced by the Ingress Controller and is used as the domain for the default host of any Routes that it exposes. It must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain.
replicas | replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2.
endpointPublishingStrategy | endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. For cloud environments, use the loadBalancer field to configure the endpoint publishing strategy. You can configure the following endpointPublishingStrategy fields: loadBalancer.scope and loadBalancer.allowedSourceRanges. If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform.
defaultCertificate | The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt: certificate file contents * tls.key: key file contents. If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains. The in-use certificate, whether generated or user-specified, is automatically integrated with the Red Hat OpenShift Service on AWS built-in OAuth server.
namespaceSelector | namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards.
routeSelector | routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards.
nodePlacement | nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the default values are used. Note: The nodePlacement parameter includes two parts, nodeSelector and tolerations. For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists
tlsSecurityProfile | tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. The minimum TLS version for Ingress Controllers is 1.1, and the maximum TLS version is 1.3. Note: Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. Important: The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1.
clientTLS | clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled. The clientTLS field has the required subfields spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA, and the optional subfield spec.clientTLS.allowedSubjectPatterns.
routeAdmission | routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces.
logging | logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled.
httpHeaders | httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy field, you specify when and how the Ingress Controller sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append. By setting headerNameCaseAdjustments, you can specify case adjustments that are applied to HTTP header names. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation.
httpCompression | httpCompression defines the policy for HTTP traffic compression.
httpErrorCodePages | httpErrorCodePages specifies custom HTTP error code response pages.
httpCaptureCookies | httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. For any cookie that you want to capture, the following parameters must be in your IngressController configuration: name, maxLength, and matchType. For example: httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE
httpCaptureHeaders | httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs, as two lists of request and response headers. For example: httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length
tuningOptions | tuningOptions specifies options for tuning the performance of the Ingress Controller pods.
logEmptyRequests | The logEmptyRequests field specifies connections for which no request is received and logged. Allowed values are Log and Ignore; the default value is Log.
HTTPEmptyRequestsPolicy | The HTTPEmptyRequestsPolicy field describes how HTTP connections are handled if the connection times out before a request is received. Allowed values are Respond and Ignore; the default value is Respond. These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems.
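As one illustration of setting a parameter from this table, the following minimal sketch patches httpEmptyRequestsPolicy on the default Ingress Controller; the field path follows the IngressController API described above, and Ignore is one of its allowed values:
$ oc patch -n openshift-ingress-operator ingresscontroller/default --type=merge \
  --patch '{"spec":{"httpEmptyRequestsPolicy":"Ignore"}}'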
2.3.3.1. Ingress Controller TLS security profiles
TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server.
2.3.3.1.1. Understanding TLS security profiles
You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various Red Hat OpenShift Service on AWS components. The Red Hat OpenShift Service on AWS TLS security profiles are based on Mozilla recommended configurations.
You can specify one of the following TLS security profiles for each component:
Profile | Description
---|---
Old | This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note: For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.
Intermediate | This profile is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Note: This profile is the recommended configuration for the majority of clients.
Modern | This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3.
Custom | This profile allows you to define the TLS version and ciphers to use. Warning: Use caution when using a Custom profile, because invalid configurations can cause problems.
When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout.
2.3.3.1.2. Configuring the TLS security profile for the Ingress Controller
To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.
Sample IngressController CR that configures the Old TLS security profile
apiVersion: operator.openshift.io/v1
kind: IngressController
...
spec:
tlsSecurityProfile:
old: {}
type: Old
...
The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers.
You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile, and the configured TLS security profile under Spec.Tls Security Profile. For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters.
The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile.
The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile:
$ oc edit IngressController default -n openshift-ingress-operator
Add the spec.tlsSecurityProfile field:
Sample IngressController CR for a Custom profile
apiVersion: operator.openshift.io/v1
kind: IngressController
...
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-CHACHA20-POLY1305
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-AES128-GCM-SHA256
      minTLSVersion: VersionTLS11
...
- Save the file to apply the changes.
Verification
Verify that the profile is set in the IngressController CR:
$ oc describe IngressController default -n openshift-ingress-operator
Example output
Name:         default
Namespace:    openshift-ingress-operator
Labels:       <none>
Annotations:  <none>
API Version:  operator.openshift.io/v1
Kind:         IngressController
...
Spec:
...
  Tls Security Profile:
    Custom:
      Ciphers:
        ECDHE-ECDSA-CHACHA20-POLY1305
        ECDHE-RSA-CHACHA20-POLY1305
        ECDHE-RSA-AES128-GCM-SHA256
        ECDHE-ECDSA-AES128-GCM-SHA256
      Min TLS Version:  VersionTLS11
    Type:               Custom
...
2.3.3.1.3. Configuring mutual TLS authentication
You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client’s certificate. Optionally, you can also configure a list of certificate subject filters.
If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have a PEM-encoded CA certificate bundle.
- If your CA bundle references a CRL distribution point, you must also include the end-entity or leaf certificate in the client CA bundle. This certificate must include an HTTP URI under CRL Distribution Points, as described in RFC 5280. For example:
Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1
Subject: SOME SIGNED CERT
X509v3 CRL Distribution Points:
  Full Name:
    URI:http://crl.example.com/example.crl
Procedure
In the openshift-config namespace, create a config map from your CA bundle:
$ oc create configmap \
  router-ca-certs-default \
  --from-file=ca-bundle.pem=client-ca.crt \ 1
  -n openshift-config
1. The config map data key must be ca-bundle.pem, and the data value must be a CA certificate in PEM format.
Edit the IngressController resource in the openshift-ingress-operator project:
$ oc edit IngressController default -n openshift-ingress-operator
Add the spec.clientTLS field and subfields to configure mutual TLS:
Sample IngressController CR for a clientTLS profile that specifies filtering patterns
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  clientTLS:
    clientCertificatePolicy: Required
    clientCA:
      name: router-ca-certs-default
    allowedSubjectPatterns:
    - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift$"
- Optional: Get the Distinguished Name (DN) for allowedSubjectPatterns by entering the following command:
$ openssl x509 -in custom-cert.pem -noout -subject
Example output
subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift
2.3.4. View the default Ingress Controller
The Ingress Operator is a core feature of Red Hat OpenShift Service on AWS and is enabled out of the box.
Every new Red Hat OpenShift Service on AWS installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.
Procedure
View the default Ingress Controller:
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/default
2.3.5. View Ingress Operator status
You can view and inspect the status of your Ingress Operator.
Procedure
View your Ingress Operator status:
$ oc describe clusteroperators/ingress
2.3.6. View Ingress Controller logs
You can view your Ingress Controller logs.
Procedure
View your Ingress Controller logs:
$ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>
2.3.7. View Ingress Controller status
You can view the status of a particular Ingress Controller.
Procedure
View the status of an Ingress Controller:
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>
2.3.8. Creating a custom Ingress Controller
As a cluster administrator, you can create a new custom Ingress Controller. Because the default Ingress Controller might change during Red Hat OpenShift Service on AWS updates, creating a custom Ingress Controller can be helpful when you need a manually maintained configuration that persists across cluster updates.
This example provides a minimal spec for a custom Ingress Controller. To further customize your custom Ingress Controller, see "Configuring the Ingress Controller".
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a YAML file that defines the custom IngressController object:
Example custom-ingress-controller.yaml file
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <custom_name> 1
  namespace: openshift-ingress-operator
spec:
  defaultCertificate:
    name: <custom-ingress-custom-certs> 2
  replicas: 1 3
  domain: <custom_domain> 4
4 - 1
- Specify the a custom
name
for theIngressController
object. - 2
- Specify the name of the secret with the custom wildcard certificate.
- 3
- Minimum replica needs to be ONE
- 4
- Specify the domain to your domain name. The domain specified on the IngressController object and the domain used for the certificate must match. For example, if the domain value is "custom_domain.mycompany.com", then the certificate must have SAN *.custom_domain.mycompany.com (with the
*.
added to the domain).
Create the object by running the following command:
$ oc create -f custom-ingress-controller.yaml
2.3.9. Configuring the Ingress Controller
2.3.9.1. Setting a custom default certificate
As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR).
Prerequisites
- You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.
Your certificate meets the following requirements:
- The certificate is valid for the ingress domain.
- The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com.
You must have an IngressController CR. You may use the default one:
$ oc --namespace openshift-ingress-operator get ingresscontrollers
Example output
NAME      AGE
default   10m
If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).
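For example, assuming hypothetical file names server.crt and intermediate-ca.crt, you can assemble the bundle in the required order before creating the secret:
$ cat server.crt intermediate-ca.crt > tls.crt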
Procedure
The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key. You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR.
This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.
Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files:
$ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
Update the IngressController CR to reference the new certificate secret:
$ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
  --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
Verify the update was effective:
$ echo Q |\
  openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\
  openssl x509 -noout -subject -issuer -enddate
where:
<domain>
- Specifies the base domain name for your cluster.
Example output
subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com
issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com
notAfter=May 10 08:32:45 2022 GMT
Tip: You can alternatively apply the following YAML to set a custom default certificate:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  defaultCertificate:
    name: custom-certs-default
The certificate secret name should match the value used to update the CR.
Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.
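You can watch the rolling redeployment finish before verifying the certificate; router-default is the deployment name for the default Ingress Controller:
$ oc rollout status deployment/router-default -n openshift-ingress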
2.3.9.2. Removing a custom default certificate
As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You previously configured a custom default certificate for the Ingress Controller.
Procedure
To remove the custom certificate and restore the certificate that ships with Red Hat OpenShift Service on AWS, enter the following command:
$ oc patch -n openshift-ingress-operator ingresscontrollers/default \
  --type json -p $'- op: remove\n path: /spec/defaultCertificate'
There can be a delay while the cluster reconciles the new certificate configuration.
Verification
To confirm that the original cluster certificate is restored, enter the following command:
$ echo Q | \
  openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \
  openssl x509 -noout -subject -issuer -enddate
where:
<domain>
- Specifies the base domain name for your cluster.
Example output
subject=CN = *.apps.<domain>
issuer=CN = ingress-operator@1620633373
notAfter=May 10 10:44:36 2023 GMT
2.3.9.3. Autoscaling an Ingress Controller
You can automatically scale an Ingress Controller to dynamically meet routing performance or availability requirements, such as the requirement to increase throughput.
The following procedure provides an example for scaling up the default Ingress Controller.
Prerequisites
- You have the OpenShift CLI (oc) installed.
- You have access to a Red Hat OpenShift Service on AWS cluster as a user with the cluster-admin role.
- You installed the Custom Metrics Autoscaler Operator and an associated KEDA Controller.
  - You can install the Operator by using OperatorHub on the web console. After you install the Operator, you can create an instance of KedaController.
Procedure
Create a service account to authenticate with Thanos by running the following command:
$ oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos
Example output
Name:                thanos
Namespace:           openshift-ingress-operator
Labels:              <none>
Annotations:         <none>
Image pull secrets:  thanos-dockercfg-kfvf2
Mountable secrets:   thanos-dockercfg-kfvf2
Tokens:              <none>
Events:              <none>
Manually create the service account secret token with the following command:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: thanos-token
  namespace: openshift-ingress-operator
  annotations:
    kubernetes.io/service-account.name: thanos
type: kubernetes.io/service-account-token
EOF
Define a TriggerAuthentication object within the openshift-ingress-operator namespace by using the service account’s token.
Create the TriggerAuthentication object and pass the value of the secret variable to the TOKEN parameter:
$ oc apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-prometheus
  namespace: openshift-ingress-operator
spec:
  secretTargetRef:
  - parameter: bearerToken
    name: thanos-token
    key: token
  - parameter: ca
    name: thanos-token
    key: ca.crt
EOF
Create and apply a role for reading metrics from Thanos:
Create a new role, thanos-metrics-reader.yaml, that reads metrics from pods and nodes:
thanos-metrics-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: thanos-metrics-reader
  namespace: openshift-ingress-operator
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
Apply the new role by running the following command:
$ oc apply -f thanos-metrics-reader.yaml
Add the new role to the service account by entering the following commands:
$ oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator
$ oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos
Note: The argument add-cluster-role-to-user is only required if you use cross-namespace queries. The following step uses a query from the kube-metrics namespace, which requires this argument.
Create a new ScaledObject YAML file, ingress-autoscaler.yaml, that targets the default Ingress Controller deployment:
Example ScaledObject definition
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ingress-scaler
  namespace: openshift-ingress-operator
spec:
  scaleTargetRef: 1
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    name: default
    envSourceContainerName: ingress-operator
  minReplicaCount: 1
  maxReplicaCount: 20 2
  cooldownPeriod: 1
  pollingInterval: 1
  triggers:
  - type: prometheus
    metricType: AverageValue
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3
      namespace: openshift-ingress-operator 4
      metricName: 'kube-node-role'
      threshold: '1'
      query: 'sum(kube_node_role{role="worker",service="kube-state-metrics"})' 5
      authModes: "bearer"
    authenticationRef:
      name: keda-trigger-auth-prometheus
1. The custom resource that you are targeting. In this case, the Ingress Controller.
2. Optional: The maximum number of replicas. If you omit this field, the default maximum is set to 100 replicas.
3. The Thanos service endpoint in the openshift-monitoring namespace.
4. The Ingress Operator namespace.
5. This expression evaluates to however many worker nodes are present in the deployed cluster.
Important: If you are using cross-namespace queries, you must target port 9091 and not port 9092 in the serverAddress field. You also must have elevated privileges to read metrics from this port.
Apply the custom resource definition by running the following command:
$ oc apply -f ingress-autoscaler.yaml
Verification
Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands:
Use the grep command to search the Ingress Controller YAML file for replicas:
$ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:
Example output
replicas: 3
Get the pods in the openshift-ingress project:
$ oc get pods -n openshift-ingress
Example output
NAME                             READY   STATUS    RESTARTS   AGE
router-default-7b5df44ff-l9pmm   2/2     Running   0          17h
router-default-7b5df44ff-s5sl5   2/2     Running   0          3d22h
router-default-7b5df44ff-wwsth   2/2     Running   0          66s
2.3.9.4. Scaling an Ingress Controller
Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController.
Scaling is not an immediate action, as it takes time to create the desired number of replicas.
Procedure
View the current number of available replicas for the default IngressController:
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
Example output
2
Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas:
$ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
Example output
ingresscontroller.operator.openshift.io/default patched
Verify that the default IngressController scaled to the number of replicas that you specified:
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
Example output
3
Tip: You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3 1

1. If you need a different number of replicas, change the replicas value.
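Because scaling is not immediate, you can watch the router pods in the openshift-ingress namespace until the desired number of replicas is running:
$ oc -n openshift-ingress get pods -w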
2.3.9.5. Configuring Ingress access logging
You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of Red Hat OpenShift Service on AWS, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs.
Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller.
Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack’s capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap.
Prerequisites
- Log in as a user with cluster-admin privileges.
Procedure
Configure Ingress access logging to a sidecar.

To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type. The following example is an Ingress Controller definition that logs to a Container destination:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access:
      destination:
        type: Container
When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod:

$ oc -n openshift-ingress logs deployment.apps/router-default -c logs

Example output

2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"
Configure Ingress access logging to a Syslog endpoint.
To configure Ingress access logging, you must specify a destination using spec.logging.access.destination. To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type. If the destination type is Syslog, you must also specify a destination endpoint using spec.logging.access.destination.syslog.address, and you can specify a facility using spec.logging.access.destination.syslog.facility. The following example is an Ingress Controller definition that logs to a Syslog destination:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 1.2.3.4
          port: 10514
Note: The syslog destination port must be UDP. The syslog destination address must be an IP address; DNS hostnames are not supported.
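If you want to sanity-check that log messages arrive at the endpoint, one option is to run a UDP listener on the destination host. This is a sketch using the example port from this procedure; the exact flags depend on your netcat variant:

$ nc -u -l 10514        # BSD netcat; with GNU netcat use: nc -u -l -p 10514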
Configure Ingress access logging with a specific log format.
You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 1.2.3.4
          port: 10514
      httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'
Disable Ingress access logging.
To disable Ingress access logging, leave spec.logging or spec.logging.access empty:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access: null
Allow the Ingress Controller to modify the HAProxy log length when using a sidecar.
Use spec.logging.access.destination.syslog.maxLength if you are using spec.logging.access.destination.type: Syslog:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 1.2.3.4
          maxLength: 4096
          port: 10514
Use spec.logging.access.destination.container.maxLength if you are using spec.logging.access.destination.type: Container:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  logging:
    access:
      destination:
        type: Container
        container:
          maxLength: 8192
2.3.9.6. Setting Ingress Controller thread count
A cluster administrator can set the thread count to increase the number of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the number of threads.
Prerequisites
- The following assumes that you already created an Ingress Controller.
Procedure
Update the Ingress Controller to increase the number of threads:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}'
Note: If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value.
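As an illustrative sketch of that note, the following IngressController places router pods on nodes carrying a hypothetical ingress-capacity: high label (an assumed label, not a standard one) and raises the thread count:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        ingress-capacity: high   # hypothetical label; use a label that exists on your nodes
  tuningOptions:
    threadCount: 8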
2.3.9.7. Configuring an Ingress Controller to use an internal load balancer
When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.
If you want to change the scope for an IngressController, you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.
Figure 2.1. Diagram of LoadBalancer

The preceding graphic shows the following concepts pertaining to Red Hat OpenShift Service on AWS Ingress LoadBalancerService endpoint publishing strategy:
- You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer.
- You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic.
- Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml, such as in the following example:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name>
spec:
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal

Replace <name> with the name of the IngressController object and <domain> with the domain that the Ingress Controller serves.

Create the Ingress Controller defined in the previous step by running the following command:

$ oc create -f <name>-ingress-controller.yaml
Optional: Confirm that the Ingress Controller was created by running the following command:

$ oc --all-namespaces=true get ingresscontrollers
2.3.9.8. Setting the Ingress Controller health check interval
A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds.
Prerequisites
- The following assumes that you already created an Ingress Controller.
Procedure
Update the Ingress Controller to change the interval between back end health checks:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}'
Note: To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval.
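For example, a minimal sketch of a route that overrides the interval for itself only; the service name hello-openshift is illustrative:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  annotations:
    router.openshift.io/haproxy.health.check.interval: 10s   # per-route override of the global interval
spec:
  to:
    kind: Service
    name: hello-openshift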
2.3.9.9. Configuring the default Ingress Controller for your cluster to be internal
You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.
If you want to change the scope for an IngressController, you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it:

$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
2.3.9.10. Configuring the route admission policy
Administrators and application developers can run applications in multiple namespaces with the same domain name. This is useful for organizations where multiple teams develop microservices that are exposed on the same hostname. An example pair of routes sharing a hostname follows the Tip at the end of this procedure.
Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.
Prerequisites
- Cluster administrator privileges.
Procedure
Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge
Sample Ingress Controller configuration

spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
...
Tip: You can alternatively apply the following YAML to configure the route admission policy:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
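With InterNamespaceAllowed set, routes in different namespaces can claim different paths of the same hostname. The following sketch assumes two namespaces, team-a and team-b, a shared hostname www.example.com, and services named frontend; all of these names are illustrative:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: team-a
spec:
  host: www.example.com
  path: /a                  # team-a claims the /a path
  to:
    kind: Service
    name: frontend
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: team-b
spec:
  host: www.example.com
  path: /b                  # team-b claims the /b path on the same hostname
  to:
    kind: Service
    name: frontend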
2.3.9.11. Using wildcard routes
The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller.
The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None, which is backwards compatible with existing IngressController resources.
Procedure
Configure the wildcard policy.
Use the following command to edit the IngressController resource:

$ oc edit IngressController
Under spec, set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed:

spec:
  routeAdmission:
    wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed
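After WildcardsAllowed is set, you can create a wildcard route with oc expose. A sketch, where the service name frontend and the domain example.com are illustrative; the resulting route serves any host under the subdomain, such as one.example.com:

$ oc expose service frontend --hostname='www.example.com' --wildcard-policy=Subdomain --name=frontend-wildcard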
2.3.9.12. HTTP header configuration
Red Hat OpenShift Service on AWS provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together.
You can only set or delete headers within an IngressController or Route CR; you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions.
2.3.9.12.1. Order of precedence
When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header.
- For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence.
- For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence.
For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration:
Example IngressController spec
apiVersion: operator.openshift.io/v1
kind: IngressController
# ...
spec:
httpHeaders:
actions:
response:
- name: X-Frame-Options
action:
type: Set
set:
value: DENY
A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN, using the following configuration:
Example Route spec
apiVersion: route.openshift.io/v1
kind: Route
# ...
spec:
httpHeaders:
actions:
response:
- name: X-Frame-Options
action:
type: Set
set:
value: SAMEORIGIN
When both the IngressController spec and Route spec configure the X-Frame-Options response header, the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. For a request header, the Route spec value overrides the IngressController spec value.
This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end:
frontend public
http-response set-header X-Frame-Options 'DENY'
frontend fe_sni
http-response set-header X-Frame-Options 'DENY'
frontend fe_no_sni
http-response set-header X-Frame-Options 'DENY'
backend be_secure:openshift-monitoring:alertmanager-main
http-response set-header X-Frame-Options 'SAMEORIGIN'
Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations.
2.3.9.12.2. Special case headers
The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances:
Header name | Configurable using IngressController spec | Configurable using Route spec | Reason for disallowment | Configurable using another method |
---|---|---|---|---|
proxy | No | No | The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during use. | No |
host | No | Yes | When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. | No |
strict-transport-security | No | No | The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. | Yes: the haproxy.router.openshift.io/hsts_header route annotation |
cookie and set-cookie | No | No | The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy’s session affinity and restrict HAProxy’s ownership of a cookie. | Yes: the haproxy.router.openshift.io/disable_cookies route annotation and the haproxy.router.openshift.io/cookie_name route annotation |
2.3.9.13. Setting or deleting HTTP request and response headers in an Ingress Controller
You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes.
For example, you might want to migrate an application running on your cluster to use mutual TLS, which requires that your application checks for an X-Forwarded-Client-Cert request header, but the Red Hat OpenShift Service on AWS default Ingress Controller provides an X-SSL-Client-Der request header.
The following procedure modifies the Ingress Controller to set the X-Forwarded-Client-Cert request header, and delete the X-SSL-Client-Der request header.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to a Red Hat OpenShift Service on AWS cluster as a user with the cluster-admin role.
Procedure
Edit the Ingress Controller resource:

$ oc -n openshift-ingress-operator edit ingresscontroller/default
Replace the X-SSL-Client-Der HTTP request header with the X-Forwarded-Client-Cert HTTP request header:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpHeaders:
    actions: 1
      request: 2
      - name: X-Forwarded-Client-Cert 3
        action:
          type: Set 4
          set:
            value: "%{+Q}[ssl_c_der,base64]" 5
      - name: X-SSL-Client-Der
        action:
          type: Delete
1 - The list of actions you want to perform on the HTTP headers.
2 - The type of header you want to change. In this case, a request header.
3 - The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration.
4 - The type of action being taken on the header. This field can have the value Set or Delete.
5 - When setting HTTP headers, you must provide a value. The value can be a string from a list of available directives for that header, for example DENY, or it can be a dynamic value that will be interpreted using HAProxy’s dynamic value syntax. In this case, a dynamic value is added.
Note: For setting dynamic header values for HTTP responses, allowed sample fetchers are res.hdr and ssl_c_der. For setting dynamic header values for HTTP requests, allowed sample fetchers are req.hdr and ssl_c_der. Both request and response dynamic values can use the lower and base64 converters.

Save the file to apply the changes.
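To confirm that the header actions reached the router configuration, you can search the generated HAProxy configuration inside a router pod. This check is a suggestion rather than a documented step; <router_pod> is a placeholder for a pod in the openshift-ingress namespace:

$ oc -n openshift-ingress rsh <router_pod> grep -i 'x-forwarded-client-cert' /var/lib/haproxy/conf/haproxy.config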
2.3.9.14. Using X-Forwarded headers
You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers, including Forwarded and X-Forwarded-For. The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller.
Procedure
Configure the HTTPHeaders field for the Ingress Controller.

Use the following command to edit the IngressController resource:

$ oc edit IngressController
Under spec, set the HTTPHeaders policy field to Append, Replace, IfNone, or Never:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpHeaders:
    forwardedHeaderPolicy: Append
Example use cases
As a cluster administrator, you can:
- Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides.
- Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header.
As an application developer, you can:
- Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application’s Route, without affecting the policy for other Routes, add the annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application.

Note: You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller.
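A minimal sketch of such a route, assuming an application service named my-app (an illustrative name):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    haproxy.router.openshift.io/set-forwarded-headers: if-none   # per-route policy, independent of the global setting
spec:
  to:
    kind: Service
    name: my-app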
2.3.9.15. Enable or disable HTTP/2 on Ingress Controllers
You can enable or disable transparent end-to-end HTTP/2 connectivity in HAProxy. Application owners can use HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.
You can enable or disable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.
If you enable or disable HTTP/2 connectivity for an individual Ingress Controller and for the entire cluster, the HTTP/2 configuration for the Ingress Controller takes precedence over the HTTP/2 configuration for the cluster.
To enable the use of HTTP/2 for a connection from the client to an HAProxy instance, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.
Consider the following use cases for an HTTP/2 connection for each route type:
- For a re-encrypt route, the connection from HAProxy to the application pod can use HTTP/2 if the application supports using Application-Level Protocol Negotiation (ALPN) to negotiate HTTP/2 with HAProxy. You cannot use HTTP/2 with a re-encrypt route unless the Ingress Controller has HTTP/2 enabled.
- For a passthrough route, the connection can use HTTP/2 if the application supports using ALPN to negotiate HTTP/2 with the client. You can use HTTP/2 with a passthrough route if the Ingress Controller has HTTP/2 enabled or disabled.
-
For an edge-terminated secure route, the connection uses HTTP/2 if the service specifies only
appProtocol: kubernetes.io/h2c
. You can use HTTP/2 with an edge-terminated secure route if the Ingress Controller has HTTP/2 enabled or disabled. -
For an insecure route, the connection uses HTTP/2 if the service specifies only
appProtocol: kubernetes.io/h2c
. You can use HTTP/2 with an insecure route if the Ingress Controller has HTTP/2 enabled or disabled.
For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client might connect to the Ingress Controller and negotiate HTTP/1.1. The Ingress Controller might then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection by using the HTTP/2 connection to the application.
This sequence of events causes an issue if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol. Consider that if you have an application that is intending to accept WebSocket connections, and the application attempts to allow for HTTP/2 protocol negotiation, the client fails any attempt to upgrade to the WebSocket protocol.
2.3.9.15.1. Enabling HTTP/2
You can enable HTTP/2 on a specific Ingress Controller, or you can enable HTTP/2 for the entire cluster.
Procedure
To enable HTTP/2 on a specific Ingress Controller, enter the oc annotate command:

$ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

Replace <ingresscontroller_name> with the name of an Ingress Controller to enable HTTP/2.
To enable HTTP/2 for the entire cluster, enter the oc annotate command:

$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
Alternatively, you can apply the following YAML code to enable HTTP/2:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
name: cluster
annotations:
ingress.operator.openshift.io/default-enable-http2: "true"
2.3.9.15.2. Disabling HTTP/2
You can disable HTTP/2 on a specific Ingress Controller, or you can disable HTTP/2 for the entire cluster.
Procedure
To disable HTTP/2 on a specific Ingress Controller, enter the oc annotate command:

$ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=false

Replace <ingresscontroller_name> with the name of an Ingress Controller to disable HTTP/2.
To disable HTTP/2 for the entire cluster, enter the oc annotate command:

$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false
Alternatively, you can apply the following YAML code to disable HTTP/2:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
name: cluster
annotations:
ingress.operator.openshift.io/default-enable-http2: "false"
2.3.9.16. Configuring the PROXY protocol for an Ingress Controller
A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork, NodePortService, or Private endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives contain only the source address that is associated with the load balancer.

The default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress Virtual IP (VIP) does not support the PROXY protocol.
For a passthrough route configuration, servers in Red Hat OpenShift Service on AWS clusters cannot observe the original client source IP address. If you need to know the original client source IP address, configure Ingress access logging for your Ingress Controller so that you can view the client source IP addresses.
For re-encrypt and edge routes, the Red Hat OpenShift Service on AWS router sets the Forwarded and X-Forwarded-For headers so that application workloads check the client source IP address.
For more information about Ingress access logging, see "Configuring Ingress access logging".
Configuring the PROXY protocol for an Ingress Controller is not supported when using the LoadBalancerService endpoint publishing strategy type. This restriction is because when Red Hat OpenShift Service on AWS runs in a cloud platform, and an Ingress Controller specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses.

You must configure both Red Hat OpenShift Service on AWS and the external load balancer to use either the PROXY protocol or Transmission Control Protocol (TCP).
Prerequisites
- You created an Ingress Controller.
Procedure
Edit the Ingress Controller resource by entering the following command in your CLI:

$ oc -n openshift-ingress-operator edit ingresscontroller/default
Set the PROXY configuration:
If your Ingress Controller uses the HostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY:

Sample hostNetwork configuration to PROXY

# ...
spec:
  endpointPublishingStrategy:
    hostNetwork:
      protocol: PROXY
    type: HostNetwork
# ...
If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY:

Sample nodePort configuration to PROXY

# ...
spec:
  endpointPublishingStrategy:
    nodePort:
      protocol: PROXY
    type: NodePortService
# ...
If your Ingress Controller uses the Private endpoint publishing strategy type, set the spec.endpointPublishingStrategy.private.protocol subfield to PROXY:

Sample private configuration to PROXY

# ...
spec:
  endpointPublishingStrategy:
    private:
      protocol: PROXY
    type: Private
# ...
2.3.9.17. Specifying an alternative cluster domain using the appsDomain option
As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for Red Hat OpenShift Service on AWS to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route.
For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster.
Prerequisites
- You deployed a Red Hat OpenShift Service on AWS cluster.
- You installed the oc command-line interface.
Procedure
Configure the appsDomain field by specifying an alternative default domain for user-created routes.

Edit the ingress cluster resource:

$ oc edit ingresses.config/cluster -o yaml
Edit the YAML file:

Sample appsDomain configuration to test.example.com

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.example.com
  appsDomain: <test.example.com>
Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change:

Note: Wait for the openshift-apiserver to finish rolling updates before exposing the route.

Expose the route:

$ oc expose service hello-openshift
route.route.openshift.io/hello-openshift exposed

Example output

$ oc get routes
NAME              HOST/PORT                                       PATH   SERVICES          PORT       TERMINATION   WILDCARD
hello-openshift   hello_openshift-<my_project>.test.example.com          hello-openshift   8080-tcp                 None
2.3.9.18. Converting HTTP header case
HAProxy lowercases HTTP header names by default; for example, changing Host: xyz.com to host: xyz.com. If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed.
Red Hat OpenShift Service on AWS includes HAProxy 2.8. If you want to update to this version of the web-based load balancer, ensure that you add the spec.httpHeaders.headerNameCaseAdjustments section to your cluster’s configuration file.
As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Capitalize an HTTP header by using the oc patch command.

Change the HTTP header from host to Host by running the following command:

$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}'
Create a Route resource YAML file so that the annotation can be applied to the application.

Example of a route named my-application

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/h1-adjust-case: "true" 1
  name: <application_name>
  namespace: <application_name>
# ...

1 - Set haproxy.router.openshift.io/h1-adjust-case so that the Ingress Controller can adjust the host request header as specified.
Specify adjustments by configuring the HeaderNameCaseAdjustments field in the Ingress Controller YAML configuration file.

The following example Ingress Controller YAML file adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes:

Example Ingress Controller YAML

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpHeaders:
    headerNameCaseAdjustments:
    - Host
The following example route enables HTTP response header name case adjustments by using the haproxy.router.openshift.io/h1-adjust-case annotation:

Example route YAML

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/h1-adjust-case: "true" 1
  name: my-application
  namespace: my-application
spec:
  to:
    kind: Service
    name: my-application

1 - Set haproxy.router.openshift.io/h1-adjust-case to true.
2.3.9.19. Using router compression
You configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341.

Memory allocated for compression can affect the maximum number of connections. Additionally, compression of large buffers can cause latency, as can heavy regex or long lists of regex.

Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression.
Procedure
Configure the httpCompression field for the Ingress Controller.

Use the following command to edit the IngressController resource:

$ oc edit -n openshift-ingress-operator ingresscontrollers/default
Under spec, set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpCompression:
    mimeTypes:
    - "text/html"
    - "text/css; charset=utf-8"
    - "application/json"
...
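To spot-check that compression is being applied, you can request a route with an Accept-Encoding header and look for Content-Encoding in the response. This is an optional check, not part of the documented procedure; <route_hostname> is a placeholder, and the header only appears for MIME types in your list:

$ curl -sI -H 'Accept-Encoding: gzip' https://<route_hostname> | grep -i content-encoding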
2.3.9.20. Exposing router metrics
You can expose the HAProxy router metrics by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems, such as Prometheus, can access the HAProxy router metrics. You can also view the HAProxy router metrics in a browser in HTML and comma-separated values (CSV) formats.
Prerequisites
- You configured your firewall to access the default stats port, 1936.
Procedure
Get the router pod name by running the following command:

$ oc get pods -n openshift-ingress
Example output

NAME                              READY   STATUS    RESTARTS   AGE
router-default-76bfffb66c-46qwp   1/1     Running   0          11h
Get the router’s username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files:

Get the username by running the following command:

$ oc rsh <router_pod_name> cat metrics-auth/statsUsername

Get the password by running the following command:

$ oc rsh <router_pod_name> cat metrics-auth/statsPassword
Get the router IP and metrics certificates by running the following command:

$ oc describe pod <router_pod>
Get the raw statistics in Prometheus format by running the following command:

$ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics
Access the metrics securely by running the following command:

$ curl -u user:password https://<router_IP>:<stats_port>/metrics -k
Access the default stats port, 1936, by running the following command:

$ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics
Example 2.1. Example output

...
# HELP haproxy_backend_connections_total Total number of connections.
# TYPE haproxy_backend_connections_total gauge
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0
...
# HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value.
# TYPE haproxy_exporter_server_threshold gauge
haproxy_exporter_server_threshold{type="current"} 11
haproxy_exporter_server_threshold{type="limit"} 500
...
# HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
# TYPE haproxy_frontend_bytes_in_total gauge
haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0
haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0
haproxy_frontend_bytes_in_total{frontend="public"} 119070
...
# HELP haproxy_server_bytes_in_total Current total of incoming bytes.
# TYPE haproxy_server_bytes_in_total gauge
haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0
haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0
haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0
haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0
...
Launch the stats window by entering the following URL in a browser:

http://<user>:<password>@<router_IP>:<stats_port>
Optional: Get the stats in CSV format by entering the following URL in a browser:

http://<user>:<password>@<router_ip>:1936/metrics;csv
2.3.9.21. Customizing HAProxy error code response pages
As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route.
Custom error code response pages are specified in a config map then patched to the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http.
Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines. Here is an example of the default Red Hat OpenShift Service on AWS HAProxy router http 503 error code response page. You can use the default content as a template for creating your own custom page.
By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on Red Hat OpenShift Service on AWS 4.8 and earlier. If a config map for the customization of an HTTP error code response is not provided, and you are using a custom HTTP error code response page, the router serves a default 404 or 503 error code response page.
If you use the Red Hat OpenShift Service on AWS default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings.
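For orientation, here is a minimal sketch of what an error-page-503.http file can look like in HAProxy's errorfile format: a raw HTTP response whose header lines end with CRLF, followed by an HTML body. The body content is illustrative, not the Red Hat default page:

HTTP/1.0 503 Service Unavailable
Pragma: no-cache
Cache-Control: private, max-age=0, no-cache, no-store
Connection: close
Content-Type: text/html

<html>
  <body>
    <h1>Service unavailable</h1>
    <p>The application is not currently available. Try again later.</p>
  </body>
</html>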
Procedure
Create a config map named my-custom-error-code-pages in the openshift-config namespace:

$ oc -n openshift-config create configmap my-custom-error-code-pages \
  --from-file=error-page-503.http \
  --from-file=error-page-404.http
Important: If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information.
Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name:

$ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge
The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern <your_ingresscontroller_name>-errorpages in the openshift-ingress namespace.

Display the copy:

$ oc get cm default-errorpages -n openshift-ingress
Example output

NAME                 DATA   AGE
default-errorpages   2      25s

The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched.
Confirm that the config map containing the custom error response page mounts on the router volume, where the config map key is the filename that has the custom HTTP error code response:

For the 503 custom HTTP error code response:

$ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http

For the 404 custom HTTP error code response:

$ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http
Verification
Verify your custom error code HTTP response:
Create a test project and application:

$ oc new-project test-ingress

$ oc new-app django-psql-example
For the 503 custom HTTP error code response:
- Stop all the pods for the application.
- Run the following curl command or visit the route hostname in the browser:

$ curl -vk <route_hostname>
For the 404 custom HTTP error code response:
- Visit a non-existent route or an incorrect route.
- Run the following curl command or visit the route hostname in the browser:

$ curl -vk <route_hostname>
Check if the errorfile attribute is properly set in the haproxy.config file:

$ oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile
2.3.9.22. Setting the Ingress Controller maximum connections
A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections.
Prerequisites
- The following assumes that you already created an Ingress Controller.
Procedure
Update the Ingress Controller to change the maximum number of connections for HAProxy:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}'
Warning: If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the "Ingress Controller configuration parameters" section for more information about this parameter.
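To review the value that is currently set, you can query the field directly; an empty result means the field is unset and the default applies:

$ oc get -n openshift-ingress-operator ingresscontroller/default -o jsonpath='{.spec.tuningOptions.maxConnections}'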
2.3.10. Red Hat OpenShift Service on AWS Ingress Operator configurations
The following table details the components of the Ingress Operator and whether Red Hat Site Reliability Engineering (SRE) maintains each component on Red Hat OpenShift Service on AWS clusters.
Ingress component | Managed by | Default configuration? |
---|---|---|
Scaling Ingress Controller | SRE | Yes |
Ingress Operator thread count | SRE | Yes |
Ingress Controller access logging | SRE | Yes |
Ingress Controller sharding | SRE | Yes |
Ingress Controller route admission policy | SRE | Yes |
Ingress Controller wildcard routes | SRE | Yes |
Ingress Controller X-Forwarded headers | SRE | Yes |
Ingress Controller route compression | SRE | Yes |
2.4. Ingress Node Firewall Operator in Red Hat OpenShift Service on AWS
The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in Red Hat OpenShift Service on AWS.
2.4.1. Ingress Node Firewall Operator
The Ingress Node Firewall Operator provides ingress firewall rules at a node level by deploying a daemon set to the nodes that you specify and manage in the firewall configurations. To deploy the daemon set, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create the ingress node firewall daemon set, which runs on all nodes that match the nodeSelector. A minimal example of this config CR follows the notes below.
You configure rules in the IngressNodeFirewall CR and apply them to clusters by using the nodeSelector and setting values to "true".
The Ingress Node Firewall Operator supports only stateless firewall rules.
Network interface controllers (NICs) that do not support native XDP drivers will run at lower performance.
For Red Hat OpenShift Service on AWS 4.14 or later, you must run Ingress Node Firewall Operator on RHEL 9.0 or later.
2.4.2. Installing the Ingress Node Firewall Operator
As a cluster administrator, you can install the Ingress Node Firewall Operator by using the Red Hat OpenShift Service on AWS CLI or the web console.
2.4.2.1. Installing the Ingress Node Firewall Operator using the CLI
As a cluster administrator, you can install the Operator using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
Procedure
To create the openshift-ingress-node-firewall namespace, enter the following command:

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
  name: openshift-ingress-node-firewall
EOF
To create an OperatorGroup CR, enter the following command:

$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ingress-node-firewall-operators
  namespace: openshift-ingress-node-firewall
EOF
Subscribe to the Ingress Node Firewall Operator.
To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command:

$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ingress-node-firewall-sub
  namespace: openshift-ingress-node-firewall
spec:
  name: ingress-node-firewall
  channel: stable
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
To verify that the Operator is installed, enter the following command:
$ oc get ip -n openshift-ingress-node-firewall
Example output
NAME            CSV                                       APPROVAL    APPROVED
install-5cvnz   ingress-node-firewall.4.0-202211122336    Automatic   true
To verify the version of the Operator, enter the following command:
$ oc get csv -n openshift-ingress-node-firewall
Example output
NAME                                      DISPLAY                          VERSION            REPLACES                                 PHASE
ingress-node-firewall.4.0-202211122336   Ingress Node Firewall Operator   4.0-202211122336   ingress-node-firewall.4.0-202211102047   Succeeded
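As a further check, you can confirm that the Operator's controller pod is running in the namespace:

$ oc get pods -n openshift-ingress-node-firewall

The controller manager pod should report a Running status, as shown in the troubleshooting section later in this chapter.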
2.4.2.2. Installing the Ingress Node Firewall Operator using the web console
As a cluster administrator, you can install the Operator using the web console.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
Procedure
Install the Ingress Node Firewall Operator:
- In the Red Hat OpenShift Service on AWS web console, click Operators → OperatorHub.
- Select Ingress Node Firewall Operator from the list of available Operators, and then click Install.
- On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
- Click Install.
Verify that the Ingress Node Firewall Operator is installed successfully:
- Navigate to the Operators → Installed Operators page. Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded.
Note

During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:
- Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
- Navigate to the Workloads → Pods page and check the logs for pods in the openshift-ingress-node-firewall project.
- Check the namespace of the YAML file. If the annotation is missing, you can add the workload.openshift.io/allowed=management annotation to the Operator namespace with the following command:

$ oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management
Note

For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation.
2.4.3. Deploying Ingress Node Firewall Operator
Prerequisite
- The Ingress Node Firewall Operator is installed.
Procedure
To deploy the Ingress Node Firewall Operator, create an IngressNodeFirewallConfig custom resource that deploys the Operator’s daemon set. You can deploy one or multiple IngressNodeFirewall CRs to nodes to apply firewall rules.
- Create the IngressNodeFirewallConfig CR named ingressnodefirewallconfig inside the openshift-ingress-node-firewall namespace.
- Run the following command to deploy the Ingress Node Firewall Operator rules:
$ oc apply -f rule.yaml
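In this command, rule.yaml contains an IngressNodeFirewall object. The following is a minimal sketch; the label name and value are placeholders, and complete examples appear in the sections below:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall
spec:
  interfaces:
    - eth0
  nodeSelector:
    matchLabels:
      <ingress_firewall_label_name>: <label_value>
  ingress:
    - sourceCIDRs:
        - 172.16.0.0/12
      rules:
        - order: 10
          protocolConfig:
            protocol: TCP
            tcp:
              ports: "8000-9000"
          action: Deny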
2.4.3.1. Ingress Node Firewall configuration object
The fields for the Ingress Node Firewall configuration object are described in the following table:
Field | Type | Description |
---|---|---|
name | string | The name of the CR object. The name of the firewall rules object must be ingressnodefirewallconfig. |
namespace | string | Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace. |
nodeSelector | string | A node selection constraint used to target nodes through specified node labels. For example: spec: nodeSelector: node-role.kubernetes.io/worker: "". Note: One label used in nodeSelector must match a label on the nodes for the daemon set to start. |
ebpfProgramManagerMode | boolean | Specifies if the Ingress Node Firewall Operator uses the eBPF Manager Operator or not to manage eBPF programs. This capability is a Technology Preview feature. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
The Operator consumes the CR and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector.
Ingress Node Firewall Operator example configuration
A complete Ingress Node Firewall configuration is specified in the following example:
Example Ingress Node Firewall Configuration object
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
name: ingressnodefirewallconfig
namespace: openshift-ingress-node-firewall
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
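To create this object, you could save the manifest to a file and apply it; the file name here is illustrative:

$ oc apply -f ingressnodefirewallconfig.yaml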
2.4.3.2. Ingress Node Firewall rules object
The fields for the Ingress Node Firewall rules object are described in the following table:
Field | Type | Description |
---|---|---|
name | string | The name of the CR object. |
interfaces | array | The fields for this object specify the interfaces to apply the firewall rules to. For example, - en0 and - en1. |
nodeSelector | array | You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeSelector labels to true to apply the rule. |
ingress | object | Allows you to configure the rules that allow outside access to the services on your cluster. |
Ingress object configuration
The values for the ingress object are defined in the following table:
Field | Type | Description |
---|---|---|
sourceCIDRs | array | Allows you to set the CIDR block. You can configure multiple CIDRs from different address families. Note: Different CIDRs allow you to use the same order rule. In the case that there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field specifies which rule is applied first. Rules are applied in ascending order. |
rules | array | Ingress firewall rules. Each rule takes an order value, starting from 1 for each sourceCIDRs entry, and rules with lower order values are applied first. Set action to Allow or Deny for each rule. Note: Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server. |
Ingress Node Firewall rules object example
A complete Ingress Node Firewall configuration is specified in the following example:
Example Ingress Node Firewall configuration
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
name: ingressnodefirewall
spec:
interfaces:
- eth0
nodeSelector:
matchLabels:
<ingress_firewall_label_name>: <label_value>
ingress:
- sourceCIDRs:
- 172.16.0.0/12
rules:
- order: 10
protocolConfig:
protocol: ICMP
icmp:
icmpType: 8 #ICMP Echo request
action: Deny
- order: 20
protocolConfig:
protocol: TCP
tcp:
ports: "8000-9000"
action: Deny
- sourceCIDRs:
- fc00:f853:ccd:e793::0/64
rules:
- order: 10
protocolConfig:
protocol: ICMPv6
icmpv6:
icmpType: 128 #ICMPV6 Echo request
action: Deny
In the nodeSelector, a <label_name> and a <label_value> must exist on the node and must match the nodeSelector label and value applied to the nodes you want the IngressNodeFirewall CR to run on. The <label_value> can be true or false. By using nodeSelector labels, you can target separate groups of nodes to apply different rules by using the IngressNodeFirewall CR.
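To check which nodes currently carry a matching label, and therefore receive the rules, you can list them with a label selector:

$ oc get nodes -l <ingress_firewall_label_name>=<label_value>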
Zero trust Ingress Node Firewall rules object example
Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH.
A complete configuration of a zero trust Ingress Node Firewall rule set is specified in the following example:
In the following case, you must add every port that your application uses to the allowlist to ensure proper functionality.
Example zero trust Ingress Node Firewall rules
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
name: ingressnodefirewall-zero-trust
spec:
interfaces:
- eth1
nodeSelector:
matchLabels:
<ingress_firewall_label_name>: <label_value>
ingress:
- sourceCIDRs:
- 0.0.0.0/0
rules:
- order: 10
protocolConfig:
protocol: TCP
tcp:
ports: 22
action: Allow
- order: 20
action: Deny
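After you apply a rule set, one way to confirm that each node has programmed the rules is to list the per-node state objects that the Operator maintains (the ingressnodefirewallnodestates CRD shown in the troubleshooting section). Depending on the resource scope in your cluster, you might also need a namespace flag:

$ oc get ingressnodefirewallnodestates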
eBPF Manager Operator integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
2.4.4. Ingress Node Firewall Operator integration
The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.
When this integration is enabled, the following limitations apply:
- The Ingress Node Firewall Operator uses TCX if XDP is not available, and TCX is incompatible with bpfman.
- The Ingress Node Firewall Operator daemon set pods remain in the ContainerCreating state until the firewall rules are applied.
- The Ingress Node Firewall Operator daemon set pods run as privileged.
2.4.5. Configuring Ingress Node Firewall Operator to use the eBPF Manager Operator
The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.
As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
- You have installed the Ingress Node Firewall Operator.
- You have installed the eBPF Manager Operator.
Procedure
Apply the following labels to the openshift-ingress-node-firewall namespace:

$ oc label namespace openshift-ingress-node-firewall \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/warn=privileged --overwrite
Edit the IngressNodeFirewallConfig object named ingressnodefirewallconfig and set the ebpfProgramManagerMode field:

Ingress Node Firewall Operator configuration object

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  ebpfProgramManagerMode: <ebpf_mode>
where:

<ebpf_mode>: Specifies whether or not the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. Must be either true or false. If unset, eBPF Manager is not used.
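Alternatively, instead of editing the object interactively, you could apply the same change with a merge patch. A sketch, assuming you want to enable the integration:

$ oc -n openshift-ingress-node-firewall patch ingressnodefirewallconfig/ingressnodefirewallconfig --type=merge -p '{"spec":{"ebpfProgramManagerMode": true}}'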
2.4.6. Viewing Ingress Node Firewall Operator rules
Procedure
Run the following command to view all current rules:

$ oc get ingressnodefirewall
Choose one of the returned <resource> names and run the following command to view the rules or configs:

$ oc get <resource> <name> -o yaml
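For example, to view the YAML for the IngressNodeFirewall object created earlier in this section:

$ oc get ingressnodefirewall ingressnodefirewall -o yaml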
2.4.7. Troubleshooting the Ingress Node Firewall Operator
Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD):
$ oc get crds | grep ingressnodefirewall
Example output
NAME                                                              CREATED AT
ingressnodefirewallconfigs.ingressnodefirewall.openshift.io      2022-08-25T10:03:01Z
ingressnodefirewallnodestates.ingressnodefirewall.openshift.io   2022-08-25T10:03:00Z
ingressnodefirewalls.ingressnodefirewall.openshift.io            2022-08-25T10:03:00Z
Run the following command to view the state of the Ingress Node Firewall Operator:
$ oc get pods -n openshift-ingress-node-firewall
Example output
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-node-firewall-controller-manager   2/2     Running   0          5d21h
ingress-node-firewall-daemon-pqx56         3/3     Running   0          5d21h
The following fields provide information about the status of the Operator: READY, STATUS, AGE, and RESTARTS. The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes.

Run the following command to collect all ingress firewall node pods' logs:
$ oc adm must-gather -- gather_ingress_node_firewall
The logs are available in the sos node’s report containing eBPF bpftool outputs at /sos_commands/ebpf. These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events.