Networking Operators


Red Hat OpenShift Service on AWS 4

Managing networking-specific Operators in Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document covers the installation, configuration, and management of various networking-related Operators in Red Hat OpenShift Service on AWS.

Chapter 1. AWS Load Balancer Operator

The AWS Load Balancer Operator is an Operator supported by Red Hat that you can optionally install on SRE-managed Red Hat OpenShift Service on AWS clusters.

Important

Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.

The AWS Load Balancer Operator is used to install, manage, and configure the AWS Load Balancer Controller in a Red Hat OpenShift Service on AWS cluster.

The AWS Load Balancer Controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when you create a Kubernetes Service resource with a type of LoadBalancer.

Compared with the default AWS in-tree load balancer provider, the AWS Load Balancer Controller supports advanced annotations for both ALBs and NLBs. Some advanced use cases are listed below, with a brief sketch after the list:

  • Use native Kubernetes Ingress objects with ALBs
  • Integrate ALBs with the AWS Web Application Firewall (WAF) service
  • Specify custom NLB source IP ranges
  • Specify custom NLB internal IP addresses
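
For example, a minimal sketch of a Service that requests an NLB with a restricted source IP range; the annotation keys follow the upstream AWS Load Balancer Controller documentation, and the names and CIDR value are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: example-nlb
  namespace: example
  annotations:
    # Hand this Service to the AWS Load Balancer Controller instead of the in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Placeholder CIDR: only clients from this range may reach the NLB
    service.beta.kubernetes.io/load-balancer-source-ranges: "10.0.0.0/8"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: example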

1.1. Prepare to install the AWS Load Balancer Operator

Before you install the AWS Load Balancer Operator, ensure that your cluster fulfills requirements and that your AWS VPC resources are appropriately tagged. You also have the option to configure some helpful environment variables.

Cluster requirements
Your cluster must be deployed across three availability zones and use a pre-existing VPC that has three public subnets.
Important

These requirements mean that the AWS Load Balancer Operator may not be suitable for some PrivateLink clusters. AWS NLBs may be a better choice for such clusters.

1.1.1. Set up temporary environment variables

You have the option to set up temporary environment variables to hold resource identifiers and configuration details. Using temporary environment variables streamlines the process of running the installation commands for the AWS Load Balancer Operator.

If you do not want to use environment variables to store certain values, you can manually enter those values in the relevant installation commands.

Prerequisites

  • You have installed the AWS CLI (aws).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to your cluster as a cluster administrator using the OpenShift CLI (oc).

    $ oc login --token=<token> --server=<cluster_url>
  2. Run the following commands to set up environment variables.

    $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed  's|^https://||' | awk -F . '{print $2}')
    $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed  's|^https://||')
    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
    $ mkdir -p ${SCRATCH}

    These commands create environment variables that you can use in this terminal session to pass their values to the command line interface.

  3. Verify that the variable values are set correctly by running the following command:

    $ echo "Cluster name: ${CLUSTER_NAME}
    Region: ${REGION}
    OIDC Endpoint: ${OIDC_ENDPOINT}
    AWS Account ID: ${AWS_ACCOUNT_ID}"

    Example output

    Cluster name: <cluster_id>
    Region: <region>
    OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id>
    AWS Account ID: <aws_id>

  4. Use the same terminal session to continue with AWS Load Balancer Operator installation, to ensure that your environment variables are not lost.

1.1.2. Tag the AWS VPC and subnets

To prepare your environment for the AWS Load Balancer Operator, tag your AWS VPC resources. This configuration ensures that the Operator can correctly identify and manage your network resources.

Prerequisites

  • You have installed the AWS CLI (aws).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Optional: Set up environment variables for AWS VPC resources.

    $ export VPC_ID=<vpc-id>
    $ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
    $ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
  2. Tag your VPC to associate it with your cluster:

    $ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
  3. Tag your public subnets to allow changes by elastic load balancing roles, and tag your private subnets to allow changes by internal elastic load balancing roles:

    $ cat <<EOF > "${SCRATCH}/tag-subnets.sh"
    #!/bin/bash
    
    aws ec2 create-tags \
         --resources ${PUBLIC_SUBNET_IDS} \
         --tags Key=kubernetes.io/role/elb,Value='' \
         --region ${REGION}
    
    aws ec2 create-tags \
         --resources ${PRIVATE_SUBNET_IDS} \
         --tags Key=kubernetes.io/role/internal-elb,Value='' \
         --region ${REGION}
    
    EOF
  4. Run the script:

    $ bash ${SCRATCH}/tag-subnets.sh
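
    Optional: To verify that the tags were applied, you can query EC2, as in the following sketch that reuses the variables from this session:

    $ aws ec2 describe-tags \
        --filters "Name=resource-id,Values=${VPC_ID}" \
        --region ${REGION}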

1.2. Installing the AWS Load Balancer Operator

You can install the AWS Load Balancer Operator by using the OpenShift CLI (oc). To make use of the environment variables that you set earlier, use the same terminal session that you used when preparing to install the AWS Load Balancer Operator.

Procedure

  1. Create a new project within your cluster for the AWS Load Balancer Operator:

    $ oc new-project aws-load-balancer-operator
  2. Create an AWS IAM policy for the AWS Load Balancer Operator.

    1. Download the appropriate IAM policy:

      $ curl -o ${SCRATCH}/operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/refs/heads/main/hack/operator-permission-policy.json
    2. Create the permission policy for the Operator:

      $ aws iam create-policy \
              --policy-name aws-load-balancer-operator-policy \
              --policy-document file://${SCRATCH}/operator-permission-policy.json \
              --region ${REGION}

      Take note of the Operator policy ARN in the output. This is referred to as the $OPERATOR_POLICY_ARN for the remainder of this process.
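
      One way to store the ARN for the later commands in this procedure is to construct it from the account ID, as in the following sketch that assumes the variables set earlier in this session:

      $ export OPERATOR_POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/aws-load-balancer-operator-policy"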

  3. Create an AWS IAM role for the AWS Load Balancer Operator:

    1. Create the trust policy for the Operator role:

      $ cat <<EOF > "${SCRATCH}/operator-trust-policy.json"
      {
       "Version": "2012-10-17",
       "Statement": [
       {
       "Effect": "Allow",
       "Condition": {
         "StringEquals" : {
           "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
         }
       },
       "Principal": {
         "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
       },
       "Action": "sts:AssumeRoleWithWebIdentity"
       }
       ]
      }
      EOF
    2. Create the Operator role using the trust policy:

      $ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
          --assume-role-policy-document "file://${SCRATCH}/operator-trust-policy.json"

      Take note of the Operator role ARN in the output. This is referred to as the $OPERATOR_ROLE_ARN for the remainder of this process.
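
      If you did not capture the ARN from the output, you can look it up again, as in the following sketch that uses the role name from the previous step:

      $ export OPERATOR_ROLE_ARN=$(aws iam get-role \
          --role-name "${CLUSTER_NAME}-alb-operator" \
          --query Role.Arn --output text)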

    3. Associate the Operator role and policy:

      $ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
          --policy-arn $OPERATOR_POLICY_ARN
  4. Install the AWS Load Balancer Operator by creating an OperatorGroup and a Subscription:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      targetNamespaces: []
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      channel: stable-v1
      name: aws-load-balancer-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      config:
        env:
        - name: ROLEARN
          value: "${OPERATOR_ROLE_ARN}"
    EOF
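
    Optional: Confirm that the Subscription has resolved and the Operator is installing. This is a quick check; the CSV name in the output varies by version:

    $ oc -n aws-load-balancer-operator get subscription aws-load-balancer-operator \
        -o jsonpath='{.status.installedCSV}'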
  5. Create an AWS IAM policy for the AWS Load Balancer Controller.

    1. Download the appropriate IAM policy:

      $ curl -o ${SCRATCH}/controller-permission-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.12.0/docs/install/iam_policy.json
    2. Create the permission policy for the Controller:

      $ aws iam create-policy \
          --region ${REGION} \
          --policy-name aws-load-balancer-controller-policy \
          --policy-document file://${SCRATCH}/controller-permission-policy.json

      Take note of the Controller policy ARN in the output. This is referred to as the $CONTROLLER_POLICY_ARN for the remainder of this process.
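
      As with the Operator policy, one way to store the ARN is to construct it from the account ID, an assumption based on the policy name used above:

      $ export CONTROLLER_POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/aws-load-balancer-controller-policy"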

  6. Create an AWS IAM role for the AWS Load Balancer Controller:

    1. Create the trust policy for the Controller role:

      $ cat <<EOF > ${SCRATCH}/controller-trust-policy.json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
              },
              "Action": "sts:AssumeRoleWithWebIdentity",
              "Condition": {
                "StringEquals": {
                  "${OIDC_ENDPOINT}:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
                  }
              }
            }
          ]
        }
      EOF
    2. Create the Controller role using the trust policy:

      $ CONTROLLER_ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-albo-controller" \
          --assume-role-policy-document "file://${SCRATCH}/controller-trust-policy.json" \
          --query Role.Arn --output text)
      $ echo ${CONTROLLER_ROLE_ARN}

      Take note of the Controller role ARN in the output. This is referred to as the $CONTROLLER_ROLE_ARN for the remainder of this process.

    3. Associate the Controller role and policy:

      $ aws iam attach-role-policy \
          --role-name "${CLUSTER_NAME}-albo-controller" \
          --policy-arn ${CONTROLLER_POLICY_ARN}
  7. Deploy an instance of the AWS Load Balancer Controller:

    $ cat << EOF | oc apply -f -
    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
     name: cluster
    spec:
     credentialsRequestConfig:
       stsIAMRoleARN: ${CONTROLLER_ROLE_ARN}
    EOF
    Note

    If you get an error here, wait a minute and try again. This situation happens because the Operator has not completed installation yet.

  8. Confirm that the Operator and Controller pods are both running:

    $ oc -n aws-load-balancer-operator get pods

    If you do not see output similar to the following, wait a few moments and retry.

    Example output

    NAME                                                             READY   STATUS    RESTARTS   AGE
    aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
    aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s

1.3. Validating Operator installation

To confirm that the AWS Load Balancer Operator and Controller are installed correctly, deploy a basic sample application. This validation process involves creating ingress and load balancing services to test the deployment.

Procedure

  1. Create a new project:

    $ oc new-project hello-world
  2. Create a new hello-world application based on the hello-openshift image:

    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  3. Configure a NodePort service for an AWS Application Load Balancer (ALB) to connect to:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-openshift-nodeport
      namespace: hello-world
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: NodePort
      selector:
        deployment: hello-openshift
    EOF
  4. Deploy an AWS ALB for the application:

    $ cat << EOF | oc apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-openshift-alb
      namespace: hello-world
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: hello-openshift-nodeport
                    port:
                      number: 80
    EOF
  5. Test access to the AWS ALB endpoint for the application:

    Note

    ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.

    $ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ curl "http://${ALB_INGRESS}"

    Example output

    Hello OpenShift!

  6. Deploy an AWS Network Load Balancer (NLB) for the application:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-openshift-nlb
      namespace: hello-world
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: LoadBalancer
      selector:
        deployment: hello-openshift
    EOF
  7. Test access to the NLB endpoint for the application:

    Note

    NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, please wait and try again.

    $ NLB=$(oc -n hello-world get service hello-openshift-nlb \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    $ curl "http://${NLB}"

    Example output

    Hello OpenShift!

  8. Delete the sample application and all resources in the hello-world namespace:

    $ oc delete project hello-world

1.4. Removing the AWS Load Balancer Operator

If you no longer need to use the AWS Load Balancer Operator, you can remove the Operator and delete any related roles and policies.

Procedure

  1. Delete the Operator Subscription:

    $ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
  2. Detach and delete the relevant AWS IAM roles:

    $ aws iam detach-role-policy \
      --role-name "<cluster-id>-alb-operator" \
      --policy-arn <operator-policy-arn>
    $ aws iam delete-role \
      --role-name "<cluster-id>-alb-operator"
  3. Delete the AWS IAM policy:

    $ aws iam delete-policy --policy-arn <operator-policy-arn>
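
  4. Optional: If you also created the Controller role and policy earlier in this chapter, detach and delete them in the same way. The following commands are sketched using the naming from the installation steps:

    $ aws iam detach-role-policy \
      --role-name "<cluster-id>-albo-controller" \
      --policy-arn <controller-policy-arn>
    $ aws iam delete-role \
      --role-name "<cluster-id>-albo-controller"
    $ aws iam delete-policy --policy-arn <controller-policy-arn>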

Chapter 2. DNS Operator in Red Hat OpenShift Service on AWS

In Red Hat OpenShift Service on AWS, the DNS Operator deploys and manages a CoreDNS instance to provide a name resolution service to pods inside the cluster, enables DNS-based Kubernetes Service discovery, and resolves internal cluster.local names.

This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.

2.1. Using DNS forwarding

Configure DNS forwarding servers and upstream resolvers for the cluster.

You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:

  • Specify name servers (spec.servers) for every zone. If the forwarded zone is the ingress domain managed by Red Hat OpenShift Service on AWS, then the upstream name server must be authorized for the domain.

    Important

    You must specify at least one zone. Otherwise, your cluster can lose functionality.

  • Provide a list of upstream DNS servers (spec.upstreamResolvers).
  • Change the default forwarding policy.

A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.

Important

During pod creation, Kubernetes uses the /etc/resolv.conf file that exists on a node. If you modify the /etc/resolv.conf file on a host node, the changes do not propagate to the /etc/resolv.conf file that exists in a container. You must re-create the container for changes to take effect.

Procedure

  • Modify the DNS Operator object named default:

    $ oc edit dns.operator/default

    After you issue the previous command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on spec.servers.

    Important

    When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality.

    If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers.

    Configuring DNS forwarding

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      cache:
        negativeTTL: 0s
        positiveTTL: 0s
      logLevel: Normal
      nodePlacement: {}
      operatorLogLevel: Normal
      servers:
      - name: example-server
        zones:
        - example.com
        forwardPlugin:
          policy: Random
          upstreams:
          - 1.1.1.1
          - 2.2.2.2:5353
      upstreamResolvers:
        policy: Random
        protocolStrategy: ""
        transportConfig: {}
        upstreams:
        - type: SystemResolvConf
        - type: Network
          address: 1.2.3.4
          port: 53
    status:
      clusterDomain: cluster.local
      clusterIP: x.y.z.10
      conditions:
    ...

    where:

    spec.servers.name
    Must comply with the rfc6335 service name syntax.
    spec.servers.zones
    Must conform to the rfc1123 subdomain syntax. The cluster domain cluster.local is invalid for zones.
    spec.servers.forwardPlugin.policy
    Specifies the upstream selection policy. Defaults to Random; the allowed values are Random, RoundRobin, and Sequential.
    spec.servers.forwardPlugin.upstreams
    Must provide no more than 15 upstream entries per forwardPlugin.
    spec.upstreamResolvers.upstreams
    Specifies upstream resolvers that override the default forwarding policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. Use this field when you need custom upstream resolvers; otherwise, queries use the servers declared in /etc/resolv.conf.
    spec.upstreamResolvers.policy
    Specifies the upstream selection order. Defaults to Sequential; allowed values are Random, RoundRobin, and Sequential.
    spec.upstreamResolvers.protocolStrategy
    Specify TCP to force the use of TCP for upstream DNS requests, even if the original client request uses UDP. Valid values are TCP or omitted. When omitted, the platform chooses a default, normally the protocol of the original client request.
    spec.upstreamResolvers.transportConfig
    Specifies the transport type, server name, and optional custom CA or CA bundle to use when forwarding DNS requests to an upstream resolver.
    spec.upstreamResolvers.upstreams.type
    Specifies two types of upstreams: SystemResolvConf or Network. SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Network resolver. You can specify one or both.
    spec.upstreamResolvers.upstreams.address
    Specifies a valid IPv4 or IPv6 address when type is Network.
    spec.upstreamResolvers.upstreams.port
    Specifies an optional field to provide a port number. Valid values are between 1 and 65535; defaults to 53 when omitted.
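
To check the resulting CoreDNS configuration after saving your changes, you can view the config map that the Operator generates; the dns-default config map lives in the openshift-dns namespace:

    $ oc get configmap/dns-default -n openshift-dns -o yaml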

Chapter 3. Ingress Operator in Red Hat OpenShift Service on AWS

The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to Red Hat OpenShift Service on AWS cluster services.

This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.

3.1. Red Hat OpenShift Service on AWS Ingress Operator

When you create your Red Hat OpenShift Service on AWS cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients.

The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing.

Red Hat Site Reliability Engineers (SRE) manage the Ingress Operator for Red Hat OpenShift Service on AWS clusters. While you cannot alter the settings for the Ingress Operator, you may view the default Ingress Controller configurations, status, and logs as well as the Ingress Operator status.

3.2. View the default Ingress Controller

The Ingress Operator is a core feature of Red Hat OpenShift Service on AWS and is enabled out of the box.

Every new Red Hat OpenShift Service on AWS installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

Procedure

  • View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

3.3. View Ingress Operator status

You can view and inspect the status of your Ingress Operator.

Procedure

  • View your Ingress Operator status:

    $ oc describe clusteroperators/ingress

3.4. View Ingress Controller logs

You can view your Ingress Controller logs.

Procedure

  • View your Ingress Controller logs:

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>

3.5. View Ingress Controller status

You can view the status of a particular Ingress Controller.

Procedure

  • View the status of an Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

3.6. Management of default Ingress Controller functions

The following table details the components of the default Ingress Controller managed by the Ingress Operator and whether Red Hat Site Reliability Engineering (SRE) maintains this component on Red Hat OpenShift Service on AWS clusters.

Table 3.1. Ingress Operator Responsibility Chart

Ingress component                         | Managed by | Default configuration?
Scaling Ingress Controller                | SRE        | Yes
Ingress Operator thread count             | SRE        | Yes
Ingress Controller access logging         | SRE        | Yes
Ingress Controller sharding               | SRE        | Yes
Ingress Controller route admission policy | SRE        | Yes
Ingress Controller wildcard routes        | SRE        | Yes
Ingress Controller X-Forwarded headers    | SRE        | Yes
Ingress Controller route compression      | SRE        | Yes

Chapter 4. Ingress Node Firewall Operator

The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in Red Hat OpenShift Service on AWS.

4.1. Ingress Node Firewall Operator

The Ingress Node Firewall Operator provides ingress firewall rules at a node level that you can specify and manage in the firewall configurations.

To deploy the daemon set created by the Operator, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create the ingress node firewall daemon set, which runs on all nodes that match the nodeSelector.

You configure the rules in an IngressNodeFirewall CR and apply them to the cluster by using a nodeSelector with label values set to "true".

Important

The Ingress Node Firewall Operator supports only stateless firewall rules.

Network interface controllers (NICs) that do not support native XDP drivers will run at a lower performance.

You must run the Ingress Node Firewall Operator on Red Hat OpenShift Service on AWS 4.14 or later.

4.2. Installing the Ingress Node Firewall Operator

As a cluster administrator, you can install the Ingress Node Firewall Operator to enable node-level ingress firewalling by using the Red Hat OpenShift Service on AWS CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. To create the openshift-ingress-node-firewall namespace, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: openshift-ingress-node-firewall
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ingress-node-firewall-operators
      namespace: openshift-ingress-node-firewall
    EOF
  3. Subscribe to the Ingress Node Firewall Operator.

    • To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command:

      $ cat << EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ingress-node-firewall-sub
        namespace: openshift-ingress-node-firewall
      spec:
        name: ingress-node-firewall
        channel: stable
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get ip -n openshift-ingress-node-firewall

    Example output

    NAME            CSV                                         APPROVAL    APPROVED
    install-5cvnz   ingress-node-firewall.4.0-202211122336   Automatic   true

  5. To verify the version of the Operator, enter the following command:

    $ oc get csv -n openshift-ingress-node-firewall

    Example output

    NAME                                        DISPLAY                          VERSION               REPLACES                                    PHASE
    ingress-node-firewall.4.0-202211122336   Ingress Node Firewall Operator   4.0-202211122336   ingress-node-firewall.4.0-202211102047   Succeeded

4.3. Installing the Ingress Node Firewall Operator using the web console

As a cluster administrator, you can install the Ingress Node Firewall Operator to enable node-level ingress firewalling by using the web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. Install the Ingress Node Firewall Operator:

    1. In the Red Hat OpenShift Service on AWS web console, click Ecosystem → Software Catalog.
    2. Select Ingress Node Firewall Operator from the list of available Operators, and then click Install.
    3. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
    4. Click Install.
  2. Verify that the Ingress Node Firewall Operator is installed successfully:

    1. Navigate to the Ecosystem → Installed Operators page.
    2. Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
      • Navigate to the Workloads → Pods page and check the logs for pods in the openshift-ingress-node-firewall project.
      • Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

        $ oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management
        Note

        For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation.

4.4. Deploying Ingress Node Firewall Operator

To deploy the Ingress Node Firewall Operator, create an IngressNodeFirewallConfig custom resource that deploys the Operator’s daemon set. You can then deploy one or more IngressNodeFirewall CRs to nodes to apply firewall rules.

Prerequisite

  • The Ingress Node Firewall Operator is installed.

Procedure

  1. Create the IngressNodeFirewallConfig CR named ingressnodefirewallconfig inside the openshift-ingress-node-firewall namespace.
  2. Run the following command to deploy Ingress Node Firewall Operator rules:

    $ oc apply -f rule.yaml

4.5. Ingress Node Firewall configuration object

Review configuration fields so you can define how the Operator deploys the firewall.

The fields for the Ingress Node Firewall configuration object are described in the following table:

Table 4.1. Ingress Node Firewall Configuration object

Field | Type | Description

metadata.name

string

The name of the CR object. The name of the firewall configuration object must be ingressnodefirewallconfig.

metadata.namespace

string

Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace.

spec.nodeSelector

string

A node selection constraint used to target nodes through specified node labels. For example:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
Note

One label used in nodeSelector must match a label on the nodes in order for the daemon set to start. For example, if the node labels node-role.kubernetes.io/worker and node-type.kubernetes.io/vm are applied to a node, then at least one label must be set using nodeSelector for the daemon set to start.

spec.ebpfProgramManagerMode

boolean

Specifies whether the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. This capability is a Technology Preview feature.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

To start, the Operator consumes an IngressNodeFirewallConfig CR to generate the daemon set on all nodes. After the IngressNodeFirewallConfig CR is created, you can create additional firewall rule objects.

4.5.1. Ingress Node Firewall Operator example configuration

A complete Ingress Node Firewall Configuration is specified in the following example:

Example of how to create an Ingress Node Firewall Configuration object

$ cat << EOF | oc create -f -
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
EOF

Note

The Operator consumes the CR object and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector.

4.5.2. Ingress Node Firewall rules object

You can review rule fields and examples to define which ingress traffic is allowed or denied by using the Ingress Node Firewall rules object.

The fields for the Ingress Node Firewall rules object are described in the following table:

Table 4.2. Ingress Node Firewall rules object

Field | Type | Description

metadata.name

string

The name of the CR object.

interfaces

array

The fields for this object specify the interfaces to apply the firewall rules to. For example, - en0 and - en1.

nodeSelector

array

You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeSelector labels to true to apply the rule.

ingress

object

ingress allows you to configure the rules that allow outside access to the services on your cluster.

4.5.2.1. Ingress object configuration

The values for the ingress object are defined in the following table:

Table 4.3. ingress object

Field | Type | Description

sourceCIDRs

array

Allows you to set the CIDR block. You can configure multiple CIDRs from different address families.

Note

Different CIDRs allow you to use the same order rule. In the case that there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field will specify which rule is applied first. Rules are applied in ascending order.

rules

array

Ingress firewall rules. The rules.order field orders rules starting at 1 for each sourceCIDRs entry, with up to 100 rules per CIDR. Rules with a lower order value are executed first.

rules.protocolConfig.protocol supports the following protocols: TCP, UDP, SCTP, ICMP, and ICMPv6. ICMP and ICMPv6 rules can match against ICMP and ICMPv6 types or codes. TCP, UDP, and SCTP rules can match against a single destination port or a range of ports using the <start : end-1> format.

Set rules.action to allow to apply the rule or deny to disallow the rule.

Note

Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server.

4.5.2.2. Ingress Node Firewall rules object example

A complete Ingress Node Firewall configuration is specified in the following example:

Example Ingress Node Firewall configuration

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall
spec:
  interfaces:
  - eth0
  nodeSelector:
    matchLabels:
      <label_name>: <label_value>
  ingress:
  - sourceCIDRs:
       - 172.16.0.0/12
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMP
        icmp:
          icmpType: 8 #ICMP Echo request
      action: Deny
    - order: 20
      protocolConfig:
        protocol: TCP
        tcp:
          ports: "8000-9000"
      action: Deny
  - sourceCIDRs:
       - fc00:f853:ccd:e793::0/64
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMPv6
        icmpv6:
          icmpType: 128 #ICMPV6 Echo request
      action: Deny

A <label_name> and a <label_value> must exist on the node and must match the nodeSelector label and value applied to the nodes that you want the IngressNodeFirewall CR to run on. The <label_value> can be true or false. By using nodeSelector labels, you can target separate groups of nodes and apply different rules with separate IngressNodeFirewall CRs.
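
For example, a hypothetical command to apply such a label to a node; the node and label names are placeholders:

$ oc label node <node_name> <label_name>=true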

4.5.2.3. Zero trust Ingress Node Firewall rules object example

Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH.

A complete configuration of a zero trust Ingress Node Firewall rule for a multi-interface cluster is specified in the following example:

Important

In the following case, you must add every port that your application uses to the allowlist to ensure proper functionality.

Example zero trust Ingress Node Firewall rules

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
 name: ingressnodefirewall-zero-trust
spec:
 interfaces:
 - eth1
 nodeSelector:
   matchLabels:
     <ingress_firewall_label_name>: <label_value>
 ingress:
 - sourceCIDRs:
      - 0.0.0.0/0
   rules:
   - order: 10
     protocolConfig:
       protocol: TCP
       tcp:
         ports: 22
     action: Allow
   - order: 20
     action: Deny

Important

eBPF Manager Operator integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

4.6. Ingress Node Firewall Operator integration

Learn when to use eBPF Manager to load and manage Ingress Node Firewall programs.

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.

When this integration is enabled, the following limitations apply:

  • The Ingress Node Firewall Operator uses TCX if XDP is not available and TCX is incompatible with bpfman.
  • The Ingress Node Firewall Operator daemon set pods remain in the ContainerCreating state until the firewall rules are applied.
  • The Ingress Node Firewall Operator daemon set pods run as privileged.

4.7. Configuring the Ingress Node Firewall Operator to use the eBPF Manager Operator

Configure the Ingress Node Firewall to use eBPF Manager for program lifecycle control.

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.

As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.
  • You installed the Ingress Node Firewall Operator.
  • You have installed the eBPF Manager Operator.

Procedure

  1. Apply the following labels to the openshift-ingress-node-firewall namespace:

    $ oc label namespace openshift-ingress-node-firewall \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/warn=privileged --overwrite
  2. Edit the IngressNodeFirewallConfig object named ingressnodefirewallconfig and set the ebpfProgramManagerMode field:

    Ingress Node Firewall Operator configuration object

    apiVersion: ingressnodefirewall.openshift.io/v1alpha1
    kind: IngressNodeFirewallConfig
    metadata:
      name: ingressnodefirewallconfig
      namespace: openshift-ingress-node-firewall
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      ebpfProgramManagerMode: <ebpf_mode>

    where:

    <ebpf_mode>: Specifies whether or not the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. Must be either true or false. If unset, eBPF Manager is not used.

4.8. Viewing Ingress Node Firewall Operator rules

Inspect existing rules and configs to confirm the firewall is applied as intended.

Procedure

  1. Run the following command to view all current rules:

    $ oc get ingressnodefirewall
  2. Choose one of the returned <resource> names and run the following command to view the rules or configs:

    $ oc get <resource> <name> -o yaml
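
    For example, to inspect the zero trust rules object created earlier in this chapter, assuming you used that name:

    $ oc get ingressnodefirewall ingressnodefirewall-zero-trust -o yaml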

4.9. Troubleshooting the Ingress Node Firewall Operator

You can verify the status and view the logs to diagnose ingress firewall deployment or rule issues.

Procedure

  • Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD):

    $ oc get crds | grep ingressnodefirewall

    Example output

    NAME                                                              CREATED AT
    ingressnodefirewallconfigs.ingressnodefirewall.openshift.io       2022-08-25T10:03:01Z
    ingressnodefirewallnodestates.ingressnodefirewall.openshift.io    2022-08-25T10:03:00Z
    ingressnodefirewalls.ingressnodefirewall.openshift.io             2022-08-25T10:03:00Z

  • Run the following command to view the state of the Ingress Node Firewall Operator:

    $ oc get pods -n openshift-ingress-node-firewall

    Example output

    NAME                                       READY  STATUS         RESTARTS  AGE
    ingress-node-firewall-controller-manager   2/2    Running        0         5d21h
    ingress-node-firewall-daemon-pqx56         3/3    Running        0         5d21h

    The following fields provide information about the status of the Operator: READY, STATUS, AGE, and RESTARTS. The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes.

  • Run the following command to collect all ingress firewall node pods' logs:

    $ oc adm must-gather -- gather_ingress_node_firewall

    The logs are available in the sos node’s report containing eBPF bpftool outputs at /sos_commands/ebpf. These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events.

Legal Notice

Copyright © Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of the OpenJS Foundation.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
