
Networking Operators


Red Hat OpenShift Service on AWS 4

Managing networking-specific Operators in Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document covers the installation, configuration, and management of various networking-related Operators in Red Hat OpenShift Service on AWS.

Chapter 1. AWS Load Balancer Operator

The AWS Load Balancer Operator is an Operator supported by Red Hat that users can optionally install on SRE-managed Red Hat OpenShift Service on AWS clusters.

Important

Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.

The AWS Load Balancer Operator is used to install, manage and configure the AWS Load Balancer Controller in a Red Hat OpenShift Service on AWS cluster.

The AWS Load Balancer Controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when you create a Kubernetes Service resource with a type of LoadBalancer.

Compared with the default AWS in-tree load balancer provider, this controller supports advanced annotations for both ALBs and NLBs. Some advanced use cases, illustrated in the sample Service manifest after this list, are:

  • Using native Kubernetes Ingress objects with ALBs
  • Integrating ALBs with the AWS Web Application Firewall (WAF) service
  • Specifying custom NLB source IP ranges
  • Specifying custom NLB internal IP addresses
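
For example, a Service of type LoadBalancer can request an internet-facing NLB and restrict which client source ranges may reach it through controller annotations. The following manifest is a minimal sketch: the Service name, selector, and CIDR value are illustrative, and the load-balancer-source-ranges annotation is assumed to be supported by your controller version:

apiVersion: v1
kind: Service
metadata:
  name: example-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Custom NLB source IP ranges: CIDRs that are allowed to reach the load balancer
    service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/24"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: example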

1.1. Preparing to install the AWS Load Balancer Operator

Before you install the AWS Load Balancer Operator, ensure that your cluster fulfills requirements and that your AWS VPC resources are appropriately tagged. You also have the option to configure some helpful environment variables.

1.1.1. Cluster requirements

Your cluster must be deployed across three availability zones, using a pre-existing VPC with three public subnets.

Important

These requirements mean that the AWS Load Balancer Operator may not be suitable for some PrivateLink clusters. AWS NLBs may be a better choice for such clusters.

1.1.2. Set up temporary environment variables

You have the option to set up temporary environment variables to hold resource identifiers and configuration details. Using temporary environment variables streamlines the process of running the installation commands for the AWS Load Balancer Operator.

If you do not want to use environment variables to store certain values, you can manually enter those values in the relevant installation commands.

Prerequisites

  • You have installed the AWS CLI (aws).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to your cluster as a cluster administrator using the OpenShift CLI (oc).

    $ oc login --token=<token> --server=<cluster_url>
  2. Run the following commands to set up environment variables.

    $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed  's|^https://||' | awk -F . '{print $2}')
    $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed  's|^https://||')
    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
    $ mkdir -p ${SCRATCH}

    These commands create environment variables that you can use in this terminal session to pass their values to the command line interface.

  3. Verify that the variable values are set correctly by running the following command:

    $ echo "Cluster name: ${CLUSTER_NAME}
    Region: ${REGION}
    OIDC Endpoint: ${OIDC_ENDPOINT}
    AWS Account ID: ${AWS_ACCOUNT_ID}"

    Example output

    Cluster name: <cluster_id>
    Region: <region>
    OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id>
    AWS Account ID: <aws_id>

Next steps

  • Use the same terminal session to continue with AWS Load Balancer Operator installation, to ensure that your environment variables are not lost.

1.1.3. Tag the AWS VPC and subnets

You must tag your AWS VPC resources before you install the AWS Load Balancer Operator.

Prerequisites

  • You have installed the AWS CLI (aws).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Optional: Set up environment variables for AWS VPC resources.

    $ export VPC_ID=<vpc-id>
    $ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
    $ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
  2. Tag your VPC to associate it with your cluster:

    $ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
  3. Tag your public subnets to allow changes by elastic load balancing roles, and tag your private subnets to allow changes by internal elastic load balancing roles:

    $ cat <<EOF > "${SCRATCH}/tag-subnets.sh"
    #!/bin/bash
    
    aws ec2 create-tags \
         --resources ${PUBLIC_SUBNET_IDS} \
         --tags Key=kubernetes.io/role/elb,Value='' \
         --region ${REGION}
    
    aws ec2 create-tags \
         --resources ${PRIVATE_SUBNET_IDS} \
         --tags Key=kubernetes.io/role/internal-elb,Value='' \
         --region ${REGION}
    
    EOF
  4. Run the script:

    $ bash ${SCRATCH}/tag-subnets.sh


1.2. Installing the AWS Load Balancer Operator

You can install the AWS Load Balancer Operator using the OpenShift CLI (oc). To make use of your environment variables, install the Operator in the same terminal session that you used to set up the temporary environment variables.

Procedure

  1. Create a new project within your cluster for the AWS Load Balancer Operator:

    $ oc new-project aws-load-balancer-operator
  2. Create an AWS IAM policy for the AWS Load Balancer Operator.

    1. Download the appropriate IAM policy:

      $ curl -o ${SCRATCH}/operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/refs/heads/main/hack/operator-permission-policy.json
    2. Create the permission policy for the Operator:

      $ aws iam create-policy \
              --policy-name aws-load-balancer-operator-policy \
              --policy-document file://${SCRATCH}/operator-permission-policy.json \
              --region ${REGION}

      Take note of the Operator policy ARN in the output. This is referred to as the $OPERATOR_POLICY_ARN for the remainder of this process.
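
      Alternatively, you can capture the ARN in an environment variable at creation time by adding query options to the previous command. This is a sketch; --query and --output are standard AWS CLI options, and the same pattern works for the Controller policy later in this procedure:

      $ OPERATOR_POLICY_ARN=$(aws iam create-policy \
              --policy-name aws-load-balancer-operator-policy \
              --policy-document file://${SCRATCH}/operator-permission-policy.json \
              --query Policy.Arn --output text \
              --region ${REGION})
      $ echo ${OPERATOR_POLICY_ARN}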

  3. Create an AWS IAM role for the AWS Load Balancer Operator:

    1. Create the trust policy for the Operator role:

      $ cat <<EOF > "${SCRATCH}/operator-trust-policy.json"
      {
       "Version": "2012-10-17",
       "Statement": [
       {
       "Effect": "Allow",
       "Condition": {
         "StringEquals" : {
           "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
         }
       },
       "Principal": {
         "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
       },
       "Action": "sts:AssumeRoleWithWebIdentity"
       }
       ]
      }
      EOF
    2. Create the Operator role using the trust policy:

      $ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
          --assume-role-policy-document "file://${SCRATCH}/operator-trust-policy.json"

      Take note of the Operator role ARN in the output. This is referred to as the $OPERATOR_ROLE_ARN for the remainder of this process.
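
      If you want this value available as an environment variable for the Subscription step later in this procedure, you can look it up after the role is created. This is a sketch using the standard aws iam get-role command:

      $ export OPERATOR_ROLE_ARN=$(aws iam get-role \
          --role-name "${CLUSTER_NAME}-alb-operator" \
          --query Role.Arn --output text)
      $ echo ${OPERATOR_ROLE_ARN}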

    3. Associate the Operator role and policy:

      $ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
          --policy-arn $OPERATOR_POLICY_ARN
  4. Install the AWS Load Balancer Operator by creating an OperatorGroup and a Subscription:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      targetNamespaces: []
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      channel: stable-v1
      name: aws-load-balancer-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      config:
        env:
        - name: ROLEARN
          value: "${OPERATOR_ROLE_ARN}"
    EOF
  5. Create an AWS IAM policy for the AWS Load Balancer Controller.

    1. Download the appropriate IAM policy:

      $ curl -o ${SCRATCH}/controller-permission-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.12.0/docs/install/iam_policy.json
    2. Create the permission policy for the Controller:

      $ aws iam create-policy \
          --region ${REGION} \
          --policy-name aws-load-balancer-controller-policy \
          --policy-document file://${SCRATCH}/controller-permission-policy.json

      Take note of the Controller policy ARN in the output. This is referred to as the $CONTROLLER_POLICY_ARN for the remainder of this process.

  6. Create an AWS IAM role for the AWS Load Balancer Controller:

    1. Create the trust policy for the Controller role:

      $ cat <<EOF > ${SCRATCH}/controller-trust-policy.json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
              },
              "Action": "sts:AssumeRoleWithWebIdentity",
              "Condition": {
                "StringEquals": {
                  "${OIDC_ENDPOINT}:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
                  }
              }
            }
          ]
        }
      EOF
    2. Create the Controller role using the trust policy:

      $ CONTROLLER_ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-albo-controller" \
          --assume-role-policy-document "file://${SCRATCH}/controller-trust-policy.json" \
          --query Role.Arn --output text)
      $ echo ${CONTROLLER_ROLE_ARN}

      Take note of the Controller role ARN in the output. This is referred to as the $CONTROLLER_ROLE_ARN for the remainder of this process.

    3. Associate the Controller role and policy:

      $ aws iam attach-role-policy \
          --role-name "${CLUSTER_NAME}-albo-controller" \
          --policy-arn ${CONTROLLER_POLICY_ARN}
  7. Deploy an instance of the AWS Load Balancer Controller:

    $ cat << EOF | oc apply -f -
    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
     name: cluster
    spec:
     credentialsRequestConfig:
       stsIAMRoleARN: ${CONTROLLER_ROLE_ARN}
    EOF
    Note

    If you get an error here, the Operator has not finished installing yet. Wait a minute and try again.

  8. Confirm that the Operator and Controller pods are both running:

    $ oc -n aws-load-balancer-operator get pods

    If you do not see output similar to the following, wait a few moments and retry.

    Example output

    NAME                                                             READY   STATUS    RESTARTS   AGE
    aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
    aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s

1.3. Validating Operator installation

Deploy a basic sample application and create ingress and load balancing services to confirm that the AWS Load Balancer Operator and Controller deployed correctly.

Procedure

  1. Create a new project:

    $ oc new-project hello-world
  2. Create a new hello-world application based on the hello-openshift image:

    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  3. Configure a NodePort service for an AWS Application Load Balancer (ALB) to connect to:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-openshift-nodeport
      namespace: hello-world
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: NodePort
      selector:
        deployment: hello-openshift
    EOF
  4. Deploy an AWS ALB for the application:

    $ cat << EOF | oc apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-openshift-alb
      namespace: hello-world
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: hello-openshift-nodeport
                    port:
                      number: 80
    EOF
  5. Test access to the AWS ALB endpoint for the application:

    Note

    ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait a few minutes and try again.

    $ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ curl "http://${ALB_INGRESS}"

    Example output

    Hello OpenShift!

  6. Deploy an AWS Network Load Balancer (NLB) for the application:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-openshift-nlb
      namespace: hello-world
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: LoadBalancer
      selector:
        deployment: hello-openshift
    EOF
  7. Test access to the NLB endpoint for the application:

    Note

    NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait a few minutes and try again.

    $ NLB=$(oc -n hello-world get service hello-openshift-nlb \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ curl "http://${NLB}"

    Example output

    Hello OpenShift!

  8. Delete the sample application and all resources in the hello-world namespace:

    $ oc delete project hello-world

1.4. Removing the AWS Load Balancer Operator

If you no longer need to use the AWS Load Balancer Operator, you can remove the Operator and delete any related roles and policies.

Procedure

  1. Delete the Operator Subscription:

    $ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
  2. Detach and delete the relevant AWS IAM roles:

    $ aws iam detach-role-policy \
      --role-name "<cluster-id>-alb-operator" \
      --policy-arn <operator-policy-arn>
    $ aws iam delete-role \
      --role-name "<cluster-id>-alb-operator"
  3. Delete the AWS IAM policy:

    $ aws iam delete-policy --policy-arn <operator-policy-arn>
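  4. Optional: If you also created the Controller role and policy earlier in this chapter, detach and delete them as well. The following commands are a sketch that assumes the default names used in the installation procedure:

    $ aws iam detach-role-policy \
      --role-name "<cluster-id>-albo-controller" \
      --policy-arn <controller-policy-arn>
    $ aws iam delete-role \
      --role-name "<cluster-id>-albo-controller"
    $ aws iam delete-policy --policy-arn <controller-policy-arn>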

Chapter 2. DNS Operator in Red Hat OpenShift Service on AWS

In Red Hat OpenShift Service on AWS, the DNS Operator deploys and manages a CoreDNS instance that provides name resolution for pods inside the cluster, enables DNS-based Kubernetes Service discovery, and resolves internal cluster.local names.

This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.

2.1. Using DNS forwarding

You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:

  • Specify name servers (spec.servers) for every zone. If the forwarded zone is the ingress domain managed by Red Hat OpenShift Service on AWS, then the upstream name server must be authorized for the domain.

    Important

    You must specify at least one zone. Otherwise, your cluster can lose functionality.

  • Provide a list of upstream DNS servers (spec.upstreamResolvers).
  • Change the default forwarding policy.
Note

A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.

Procedure

  • Modify the DNS Operator object named default:

    $ oc edit dns.operator/default

    After you issue the previous command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on spec.servers.
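
    To inspect the resulting CoreDNS configuration, you can view this config map. This is a minimal sketch; the dns-default config map is created in the openshift-dns namespace:

    $ oc get configmap/dns-default -n openshift-dns -o yaml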

    Important

    When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality.

    If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers.

    Configuring DNS forwarding

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      cache:
        negativeTTL: 0s
        positiveTTL: 0s
      logLevel: Normal
      nodePlacement: {}
      operatorLogLevel: Normal
      servers:
      - name: example-server 1
        zones:
        - example.com 2
        forwardPlugin:
          policy: Random 3
          upstreams: 4
          - 1.1.1.1
          - 2.2.2.2:5353
      upstreamResolvers: 5
        policy: Random 6
        protocolStrategy: "" 7
        transportConfig: {} 8
        upstreams:
        - type: SystemResolvConf 9
        - type: Network
          address: 1.2.3.4 10
          port: 53 11
    status:
      clusterDomain: cluster.local
      clusterIP: x.y.z.10
      conditions:
    ...

    1  Must comply with the rfc6335 service name syntax.
    2  Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local, is an invalid subdomain for the zones field.
    3  Defines the policy for selecting upstream resolvers listed in the forwardPlugin. The default value is Random. You can also use the values RoundRobin and Sequential.
    4  A maximum of 15 upstreams is allowed per forwardPlugin.
    5  You can use upstreamResolvers to override the default forwarding policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers declared in /etc/resolv.conf.
    6  Determines the order in which upstream servers listed in upstreams are selected for querying. You can specify one of these values: Random, RoundRobin, or Sequential. The default value is Sequential.
    7  When omitted, the platform chooses a default, normally the protocol of the original client request. Set to TCP to specify that the platform should use TCP for all upstream DNS requests, even if the client request uses UDP.
    8  Used to configure the transport type, server name, and optional custom CA or CA bundle to use when forwarding DNS requests to an upstream resolver.
    9  You can specify two types of upstreams: SystemResolvConf or Network. SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Network resolver. You can specify one or both.
    10 If the specified type is Network, you must provide an IP address. The address field must be a valid IPv4 or IPv6 address.
    11 If the specified type is Network, you can optionally provide a port. The port field must have a value between 1 and 65535. If you do not specify a port for the upstream, the default port is 853.

Chapter 3. Ingress Operator in Red Hat OpenShift Service on AWS

The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to Red Hat OpenShift Service on AWS cluster services.

This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.

3.1. Red Hat OpenShift Service on AWS Ingress Operator

When you create your Red Hat OpenShift Service on AWS cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients.

The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing.

Red Hat Site Reliability Engineers (SRE) manage the Ingress Operator for Red Hat OpenShift Service on AWS clusters. While you cannot alter the settings for the Ingress Operator, you may view the default Ingress Controller configurations, status, and logs as well as the Ingress Operator status.

3.2. View the default Ingress Controller

The Ingress Operator is a core feature of Red Hat OpenShift Service on AWS and is enabled out of the box.

Every new Red Hat OpenShift Service on AWS installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

Procedure

  • View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

3.3. View Ingress Operator status

You can view and inspect the status of your Ingress Operator.

Procedure

  • View your Ingress Operator status:

    $ oc describe clusteroperators/ingress

3.4. View Ingress Controller logs

You can view your Ingress Controller logs.

Procedure

  • View your Ingress Controller logs:

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>
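
    For example, assuming the Operator's main container is named ingress-operator (a sketch; check the container names in the deployment if this differs):

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c ingress-operator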

3.5. View Ingress Controller status

You can view the status of a particular Ingress Controller.

Procedure

  • View the status of an Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

3.6. Management of default Ingress Controller functions

The following table details the components of the default Ingress Controller managed by the Ingress Operator and whether Red Hat Site Reliability Engineering (SRE) maintains this component on Red Hat OpenShift Service on AWS clusters.

Table 3.1. Ingress Operator Responsibility Chart

  Ingress component                           Managed by   Default configuration?
  Scaling Ingress Controller                  SRE          Yes
  Ingress Operator thread count               SRE          Yes
  Ingress Controller access logging           SRE          Yes
  Ingress Controller sharding                 SRE          Yes
  Ingress Controller route admission policy   SRE          Yes
  Ingress Controller wildcard routes          SRE          Yes
  Ingress Controller X-Forwarded headers      SRE          Yes
  Ingress Controller route compression        SRE          Yes

Chapter 4. Ingress Node Firewall Operator in Red Hat OpenShift Service on AWS

The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in Red Hat OpenShift Service on AWS.

4.1. Ingress Node Firewall Operator

The Ingress Node Firewall Operator provides ingress firewall rules at a node level by deploying a daemon set to the nodes that you specify and manage in the firewall configuration. To deploy the daemon set, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create the ingress node firewall daemon set, which runs on all nodes that match the nodeSelector.

You configure rules in the IngressNodeFirewall CR and apply them to the cluster by using a nodeSelector and setting the matching label values to "true".

Important

The Ingress Node Firewall Operator supports only stateless firewall rules.

Network interface controllers (NICs) that do not support native XDP drivers run at lower performance.

You must run the Ingress Node Firewall Operator on Red Hat OpenShift Service on AWS 4.14 or later.

4.2. Installing the Ingress Node Firewall Operator

As a cluster administrator, you can install the Ingress Node Firewall Operator by using the Red Hat OpenShift Service on AWS CLI or the web console.

4.2.1. Installing the Ingress Node Firewall Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. To create the openshift-ingress-node-firewall namespace, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: openshift-ingress-node-firewall
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ingress-node-firewall-operators
      namespace: openshift-ingress-node-firewall
    EOF
  3. Subscribe to the Ingress Node Firewall Operator.

    • To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command:

      $ cat << EOF| oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ingress-node-firewall-sub
        namespace: openshift-ingress-node-firewall
      spec:
        name: ingress-node-firewall
        channel: stable
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get ip -n openshift-ingress-node-firewall

    Example output

    NAME            CSV                                         APPROVAL    APPROVED
    install-5cvnz   ingress-node-firewall.4.0-202211122336   Automatic   true

  5. To verify the version of the Operator, enter the following command:

    $ oc get csv -n openshift-ingress-node-firewall

    Example output

    NAME                                        DISPLAY                          VERSION               REPLACES                                    PHASE
    ingress-node-firewall.4.0-202211122336   Ingress Node Firewall Operator   4.0-202211122336   ingress-node-firewall.4.0-202211102047   Succeeded

4.2.2. Installing the Ingress Node Firewall Operator using the web console

As a cluster administrator, you can install the Operator using the web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. Install the Ingress Node Firewall Operator:

    1. In the Red Hat OpenShift Service on AWS web console, click Operators → OperatorHub.
    2. Select Ingress Node Firewall Operator from the list of available Operators, and then click Install.
    3. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
    4. Click Install.
  2. Verify that the Ingress Node Firewall Operator is installed successfully:

    1. Navigate to the Operators → Installed Operators page.
    2. Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
      • Navigate to the Workloads → Pods page and check the logs for pods in the openshift-ingress-node-firewall project.
      • Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

        $ oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management
        Note

        For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation.

4.3. Deploying Ingress Node Firewall Operator

Prerequisite

  • The Ingress Node Firewall Operator is installed.

Procedure

To deploy the Ingress Node Firewall Operator, create an IngressNodeFirewallConfig custom resource that deploys the Operator's daemon set. You can then deploy one or multiple IngressNodeFirewall CRs to nodes to apply firewall rules.

  1. Create the IngressNodeFirewallConfig resource named ingressnodefirewallconfig inside the openshift-ingress-node-firewall namespace.
  2. Run the following command to deploy the Ingress Node Firewall rules:

    $ oc apply -f rule.yaml

4.3.1. Ingress Node Firewall configuration object

The fields for the Ingress Node Firewall configuration object are described in the following table:

Table 4.1. Ingress Node Firewall Configuration object

metadata.name (string)

  The name of the CR object. The name of the firewall rules object must be ingressnodefirewallconfig.

metadata.namespace (string)

  Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace.

spec.nodeSelector (string)

  A node selection constraint used to target nodes through specified node labels. For example:

  spec:
    nodeSelector:
      node-role.kubernetes.io/worker: ""

  Note

  One label used in nodeSelector must match a label on the nodes in order for the daemon set to start. For example, if the node labels node-role.kubernetes.io/worker and node-type.kubernetes.io/vm are applied to a node, then at least one label must be set using nodeSelector for the daemon set to start.

spec.ebpfProgramManagerMode (boolean)

  Specifies whether the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. This capability is a Technology Preview feature.

  For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

To start, the Operator consumes an IngressNodeFirewallConfig CR in order to generate the daemon set on all nodes. After this is created, additional firewall rule objects can be created.

4.3.2. Ingress Node Firewall Operator example configuration

A complete Ingress Node Firewall Configuration is specified in the following example:

Example of how to create an Ingress Node Firewall Configuration object

$ cat << EOF | oc create -f -
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
EOF

Note

The Operator consumes the CR object and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector.

4.3.3. Ingress Node Firewall rules object

The fields for the Ingress Node Firewall rules object are described in the following table:

Table 4.2. Ingress Node Firewall rules object

metadata.name (string)

  The name of the CR object.

interfaces (array)

  The fields for this object specify the interfaces to apply the firewall rules to. For example, en0 and en1.

nodeSelector (array)

  You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeSelector labels to true to apply the rule.

ingress (object)

  ingress allows you to configure the rules that allow outside access to the services on your cluster.

4.3.3.1. Ingress object configuration

The values for the ingress object are defined in the following table:

Table 4.3. ingress object

sourceCIDRs (array)

  Allows you to set the CIDR block. You can configure multiple CIDRs from different address families.

  Note

  Different CIDRs allow you to use the same order rule. In the case that there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field specifies which rule is applied first. Rules are applied in ascending order.

rules (array)

  Ingress firewall rules. The rules.order values are ordered starting at 1 for each sourceCIDRs entry, with up to 100 rules per CIDR. Lower order rules are executed first.

  rules.protocolConfig.protocol supports the following protocols: TCP, UDP, SCTP, ICMP, and ICMPv6. ICMP and ICMPv6 rules can match against ICMP and ICMPv6 types or codes. TCP, UDP, and SCTP rules can match against a single destination port or a range of ports using <start : end-1> format.

  Set rules.action to allow to apply the rule or deny to disallow the rule.

  Note

  Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server.

4.3.3.2. Ingress Node Firewall rules object example

A complete Ingress Node Firewall configuration is specified in the following example:

Example Ingress Node Firewall configuration

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall
spec:
  interfaces:
  - eth0
  nodeSelector:
    matchLabels:
      <ingress_firewall_label_name>: <label_value> 1
  ingress:
  - sourceCIDRs:
       - 172.16.0.0/12
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMP
        icmp:
          icmpType: 8 #ICMP Echo request
      action: Deny
    - order: 20
      protocolConfig:
        protocol: TCP
        tcp:
          ports: "8000-9000"
      action: Deny
  - sourceCIDRs:
       - fc00:f853:ccd:e793::0/64
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMPv6
        icmpv6:
          icmpType: 128 #ICMPV6 Echo request
      action: Deny

1 A <label_name> and a <label_value> must exist on the node and must match the nodeSelector label and value applied to the nodes that you want the IngressNodeFirewall CR to run on. The <label_value> can be true or false. By using nodeSelector labels, you can target separate groups of nodes to apply different rules to.
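
For example, you can apply a matching label to a node before you create the IngressNodeFirewall object. This is a sketch that uses the placeholder names from the example above:

$ oc label node <node_name> <ingress_firewall_label_name>=<label_value>
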
4.3.3.3. Zero trust Ingress Node Firewall rules object example

Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH.

A complete configuration of a zero trust Ingress Node Firewall rule set is specified in the following example:

Important

When you use a rule set like the following, you must add every port that your applications use to the allowlist to ensure proper functionality.

Example zero trust Ingress Node Firewall rules

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
 name: ingressnodefirewall-zero-trust
spec:
 interfaces:
 - eth1 1
 nodeSelector:
   matchLabels:
     <ingress_firewall_label_name>: <label_value> 2
 ingress:
 - sourceCIDRs:
      - 0.0.0.0/0 3
   rules:
   - order: 10
     protocolConfig:
       protocol: TCP
       tcp:
         ports: 22
     action: Allow
   - order: 20
     action: Deny 4

1 The network interface on the multi-interface cluster that the rules apply to.
2 The <label_name> and <label_value> must match the nodeSelector label and value applied to the specific nodes to which you want the IngressNodeFirewall CR to apply.
3 0.0.0.0/0 is set to match any CIDR, so the rules apply to traffic from any source.
4 action is set to Deny, so all traffic that does not match an earlier rule is blocked.

Important

eBPF Manager Operator integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

4.4. Ingress Node Firewall Operator integration

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.

When this integration is enabled, the following limitations apply:

  • The Ingress Node Firewall Operator uses TCX if XDP is not available and TCX is incompatible with bpfman.
  • The Ingress Node Firewall Operator daemon set pods remain in the ContainerCreating state until the firewall rules are applied.
  • The Ingress Node Firewall Operator daemon set pods run as privileged.

4.5. Configuring Ingress Node Firewall Operator to use the eBPF Manager Operator

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.

As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.
  • You installed the Ingress Node Firewall Operator.
  • You have installed the eBPF Manager Operator.

Procedure

  1. Apply the following labels to the openshift-ingress-node-firewall namespace:

    $ oc label namespace openshift-ingress-node-firewall \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/warn=privileged --overwrite
  2. Edit the IngressNodeFirewallConfig object named ingressnodefirewallconfig and set the ebpfProgramManagerMode field:

    Ingress Node Firewall Operator configuration object

    apiVersion: ingressnodefirewall.openshift.io/v1alpha1
    kind: IngressNodeFirewallConfig
    metadata:
      name: ingressnodefirewallconfig
      namespace: openshift-ingress-node-firewall
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      ebpfProgramManagerMode: <ebpf_mode>

    where:

    <ebpf_mode>: Specifies whether the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. Must be either true or false. If unset, the eBPF Manager Operator is not used.
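
    If you prefer not to open an editor, you can set the field with a patch instead. This is a sketch that uses the standard oc patch command with a merge patch:

    $ oc -n openshift-ingress-node-firewall patch ingressnodefirewallconfig ingressnodefirewallconfig \
        --type merge -p '{"spec":{"ebpfProgramManagerMode":true}}'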

4.6. Viewing Ingress Node Firewall Operator rules

Procedure

  1. Run the following command to view all current rules:

    $ oc get ingressnodefirewall
  2. Choose one of the returned <resource> names and run the following command to view the rules or configs:

    $ oc get <resource> <name> -o yaml

4.7. Troubleshooting the Ingress Node Firewall Operator

  • Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD):

    $ oc get crds | grep ingressnodefirewall

    Example output

     NAME                                                              CREATED AT
     ingressnodefirewallconfigs.ingressnodefirewall.openshift.io      2022-08-25T10:03:01Z
     ingressnodefirewallnodestates.ingressnodefirewall.openshift.io   2022-08-25T10:03:00Z
     ingressnodefirewalls.ingressnodefirewall.openshift.io            2022-08-25T10:03:00Z

  • Run the following command to view the state of the Ingress Node Firewall Operator:

    $ oc get pods -n openshift-ingress-node-firewall

    Example output

    NAME                                       READY  STATUS         RESTARTS  AGE
    ingress-node-firewall-controller-manager   2/2    Running        0         5d21h
    ingress-node-firewall-daemon-pqx56         3/3    Running        0         5d21h

    The following fields provide information about the status of the Operator: READY, STATUS, AGE, and RESTARTS. The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes.

  • Run the following command to collect all ingress firewall node pods' logs:

    $ oc adm must-gather -- gather_ingress_node_firewall

    The logs are available in the node's sos report, which contains eBPF bpftool outputs at /sos_commands/ebpf. These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events.

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
