Networking Operators
Managing networking-specific Operators in OpenShift Container Platform
Abstract
Chapter 1. AWS Load Balancer Operator
The AWS Load Balancer Operator is an Operator supported by Red Hat that users can optionally install on SRE-managed Red Hat OpenShift Service on AWS clusters.
Load Balancers created by the AWS Load Balancer Operator cannot be used for OpenShift Routes, and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
The AWS Load Balancer Operator is used to install, manage and configure the AWS Load Balancer Controller in a Red Hat OpenShift Service on AWS cluster.
The AWS Load Balancer Controller provisions AWS Application Load Balancers (ALB) when you create Kubernetes Ingress resources and AWS Network Load Balancers (NLB) when you create a Kubernetes Service resource with a type of LoadBalancer.
Compared with the default AWS in-tree load balancer provider, this controller is developed with advanced annotations for both ALBs and NLBs. Some advanced use cases are:
- Use native Kubernetes Ingress objects with ALBs
- Integrate ALBs with the AWS Web Application Firewall (WAF) service
- Specify custom NLB source IP ranges
- Specify custom NLB internal IP addresses
1.1. Preparing to install the AWS Load Balancer Operator
Before you install the AWS Load Balancer Operator, ensure that your cluster fulfills requirements and that your AWS VPC resources are appropriately tagged. You also have the option to configure some helpful environment variables.
1.1.1. Cluster requirements
Your cluster must be deployed across three availability zones, using a pre-existing VPC with three public subnets.
These requirements mean that the AWS Load Balancer Operator may not be suitable for some PrivateLink clusters. AWS NLBs may be a better choice for such clusters.
1.1.2. Set up temporary environment variables
You have the option to set up temporary environment variables to hold resource identifiers and configuration details. Using temporary environment variables streamlines the process of running the installation commands for the AWS Load Balancer Operator.
If you do not want to use environment variables to store certain values, you can manually enter those values in the relevant installation commands.
Prerequisites
- You have installed the AWS CLI (aws).
- You have installed the OpenShift CLI (oc).
Procedure
Log in to your cluster as a cluster administrator using the OpenShift CLI (oc):

$ oc login --token=<token> --server=<cluster_url>
Run the following commands to set up environment variables:
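For example, the following commands derive the values from the cluster and the AWS CLI; the SCRATCH working directory used to stage files in later steps is an arbitrary choice:

$ export CLUSTER_NAME=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}')
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER_NAME}-alb-operator"
$ mkdir -p ${SCRATCH}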
These commands create environment variables that you can use in this terminal session to pass their values to the command line interface.
Verify that the variable values are set correctly by running the following command:
echo "Cluster name: ${CLUSTER_NAME}
$ echo "Cluster name: ${CLUSTER_NAME} Region: ${REGION} OIDC Endpoint: ${OIDC_ENDPOINT} AWS Account ID: ${AWS_ACCOUNT_ID}"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Cluster name: <cluster_id> Region: <region> OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id> AWS Account ID: <aws_id>
Cluster name: <cluster_id> Region: <region> OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id> AWS Account ID: <aws_id>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Next steps
- Use the same terminal session to continue with AWS Load Balancer Operator installation, to ensure that your environment variables are not lost.
1.1.3. Tag the AWS VPC and subnets
You must tag your AWS VPC resources before you install the AWS Load Balancer Operator.
Prerequisites
- You have installed the AWS CLI (aws).
- You have installed the OpenShift CLI (oc).
Procedure
Optional: Set up environment variables for AWS VPC resources:

$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
$ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
Tag your VPC to associate it with your cluster:

$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
Tag your public subnets to allow changes by elastic load balancing roles, and tag your private subnets to allow changes by internal elastic load balancing roles:
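One way to do this is to write a short script into ${SCRATCH} that applies the standard AWS Load Balancer Controller subnet role tags, kubernetes.io/role/elb for public subnets and kubernetes.io/role/internal-elb for private subnets; the script name matches the one run in the next step:

$ cat << EOF > ${SCRATCH}/tag-subnets.sh
#!/bin/bash
# Public subnets: tag for internet-facing load balancers
for SUBNET in ${PUBLIC_SUBNET_IDS}; do
  aws ec2 create-tags --resources \${SUBNET} \\
    --tags Key=kubernetes.io/role/elb,Value='1' --region ${REGION}
done
# Private subnets: tag for internal load balancers
for SUBNET in ${PRIVATE_SUBNET_IDS}; do
  aws ec2 create-tags --resources \${SUBNET} \\
    --tags Key=kubernetes.io/role/internal-elb,Value='1' --region ${REGION}
done
EOF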
Run the script:
$ bash ${SCRATCH}/tag-subnets.sh
Additional resources
- To set up a Red Hat OpenShift Service on AWS cluster with multiple availability zones, see A multi-AZ Red Hat OpenShift Service on AWS cluster
1.2. Installing the AWS Load Balancer Operator
You can install the AWS Load Balancer Operator by using the OpenShift CLI (oc). Use the same terminal session that you used to set up your environment variables so that they remain available to the installation commands.
Procedure
Create a new project within your cluster for the AWS Load Balancer Operator:

$ oc new-project aws-load-balancer-operator
Create an AWS IAM policy for the AWS Load Balancer Operator.
Download the appropriate IAM policy:

$ curl -o ${SCRATCH}/operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/refs/heads/main/hack/operator-permission-policy.json
Create the permission policy for the Operator:

$ aws iam create-policy \
    --policy-name aws-load-balancer-operator-policy \
    --policy-document file://${SCRATCH}/operator-permission-policy.json \
    --region ${REGION}
Take note of the Operator policy ARN in the output. This is referred to as the $OPERATOR_POLICY_ARN for the remainder of this process.
Create an AWS IAM role for the AWS Load Balancer Operator:
Create the trust policy for the Operator role:
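A sketch of the trust policy, assuming the Operator's service account is aws-load-balancer-operator-controller-manager in the aws-load-balancer-operator namespace:

$ cat << EOF > ${SCRATCH}/operator-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager"
        }
      }
    }
  ]
}
EOF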
Create the Operator role using the trust policy:
$ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
    --assume-role-policy-document "file://${SCRATCH}/operator-trust-policy.json"
Take note of the Operator role ARN in the output. This is referred to as the $OPERATOR_ROLE_ARN for the remainder of this process.

Associate the Operator role and policy:
$ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
    --policy-arn $OPERATOR_POLICY_ARN
Install the AWS Load Balancer Operator by creating an OperatorGroup and a Subscription:
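A sketch of the two resources, assuming the stable-v1 channel and the redhat-operators catalog source; the ROLEARN environment variable passes the $OPERATOR_ROLE_ARN created in the previous step to the Operator:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN
      value: "${OPERATOR_ROLE_ARN}"
EOF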
Create an AWS IAM policy for the AWS Load Balancer Controller.
Download the appropriate IAM policy:

$ curl -o ${SCRATCH}/controller-permission-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.12.0/docs/install/iam_policy.json
Create the permission policy for the Controller:

$ aws iam create-policy \
    --region ${REGION} \
    --policy-name aws-load-balancer-controller-policy \
    --policy-document file://${SCRATCH}/controller-permission-policy.json
Take note of the Controller policy ARN in the output. This is referred to as the $CONTROLLER_POLICY_ARN for the remainder of this process.
Create an AWS IAM role for the AWS Load Balancer Controller:
Create the trust policy for the Controller role:
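A sketch of the trust policy, assuming the Controller instance will be named cluster so that its service account is aws-load-balancer-controller-cluster:

$ cat << EOF > ${SCRATCH}/controller-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"
        }
      }
    }
  ]
}
EOF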
Create the Controller role using the trust policy:
$ CONTROLLER_ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-albo-controller" \
    --assume-role-policy-document "file://${SCRATCH}/controller-trust-policy.json" \
    --query Role.Arn --output text)
$ echo ${CONTROLLER_ROLE_ARN}
Take note of the Controller role ARN in the output. This is referred to as the $CONTROLLER_ROLE_ARN for the remainder of this process.

Associate the Controller role and policy:
$ aws iam attach-role-policy \
    --role-name "${CLUSTER_NAME}-albo-controller" \
    --policy-arn ${CONTROLLER_POLICY_ARN}
Deploy an instance of the AWS Load Balancer Controller:
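A minimal sketch, assuming a Controller instance named cluster; the credentialsRequestConfig field that passes the STS role is an assumption based on the Operator's STS support:

$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  credentialsRequestConfig:
    stsIAMRoleARN: "${CONTROLLER_ROLE_ARN}"
EOF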
Note: If you get an error here, wait a minute and try again; the Operator might not have finished installing yet.
Confirm that the Operator and Controller pods are both running:
$ oc -n aws-load-balancer-operator get pods

If you do not see output similar to the following, wait a few moments and retry.
Example output
NAME                                                             READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d            1/1     Running   0          99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn   2/2     Running   0          2m4s
1.3. Validating Operator installation
Deploy a basic sample application and create ingress and load balancing services to confirm that the AWS Load Balancer Operator and Controller deployed correctly.
Procedure
Create a new project:

$ oc new-project hello-world
Create a new hello-world application based on the hello-openshift image:

$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Configure a NodePort service for an AWS Application Load Balancer (ALB) to connect to:
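A sketch of the service, assuming the deployment: hello-openshift selector label that oc new-app applies to the application pods:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nodeport
  namespace: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    deployment: hello-openshift
EOF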
Deploy an AWS ALB for the application:
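A sketch of the Ingress resource; the alb ingress class and the internet-facing scheme annotation are the standard AWS Load Balancer Controller settings, and the backend is the NodePort service created above:

$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift-alb
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: hello-openshift-nodeport
            port:
              number: 80
EOF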
Test access to the AWS ALB endpoint for the application:
Note: ALB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.

$ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${ALB_INGRESS}"
Example output

Hello OpenShift!
Deploy an AWS Network Load Balancer (NLB) for the application:
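A sketch of the service, using the standard AWS Load Balancer Controller annotations for an internet-facing NLB with instance targets:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift-nlb
  namespace: hello-world
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    deployment: hello-openshift
EOF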
Test access to the NLB endpoint for the application:
Note: NLB provisioning takes a few minutes. If you receive an error that says curl: (6) Could not resolve host, wait and try again.

$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
Example output

Hello OpenShift!

You can now delete the sample application and all resources in the hello-world namespace:

$ oc delete project hello-world
1.4. Removing the AWS Load Balancer Operator
If you no longer need to use the AWS Load Balancer Operator, you can remove the Operator and delete any related roles and policies.
Procedure
Delete the Operator Subscription:

$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
Detach and delete the relevant AWS IAM roles:

$ aws iam detach-role-policy \
    --role-name "<cluster-id>-alb-operator" \
    --policy-arn <operator-policy-arn>
$ aws iam delete-role \
    --role-name "<cluster-id>-alb-operator"
Delete the AWS IAM policy:

$ aws iam delete-policy --policy-arn <operator-policy-arn>
Chapter 2. DNS Operator in Red Hat OpenShift Service on AWS

In Red Hat OpenShift Service on AWS, the DNS Operator deploys and manages a CoreDNS instance to provide a name resolution service to pods inside the cluster, enable DNS-based Kubernetes Service discovery, and resolve internal cluster.local names.
This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.
2.1. Using DNS forwarding
You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:
- Specify name servers (spec.servers) for every zone. If the forwarded zone is the ingress domain managed by Red Hat OpenShift Service on AWS, then the upstream name server must be authorized for the domain.
  Important: You must specify at least one zone. Otherwise, your cluster can lose functionality.
- Provide a list of upstream DNS servers (spec.upstreamResolvers).
- Change the default forwarding policy.
A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.
Procedure
Modify the DNS Operator object named default:

$ oc edit dns.operator/default
After you issue the previous command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on spec.servers.

Important: When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality.

If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers.
Configuring DNS forwarding
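The following sample shows the DNS object that the callouts below describe; the server name, zone, and resolver addresses are illustrative values:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: example-server 1
    zones: 2
    - example.com
    forwardPlugin:
      policy: Random 3
      upstreams: 4
      - 1.1.1.1
      - 2.2.2.2:5353
  upstreamResolvers: 5
    policy: Random 6
    protocolStrategy: "TCP" 7
    transportConfig:
      transport: Cleartext 8
    upstreams: 9
    - type: SystemResolvConf
    - type: Network
      address: 1.2.3.4 10
      port: 53 11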
1. Must comply with the rfc6335 service name syntax.
2. Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local, is an invalid subdomain for the zones field.
3. Defines the policy to select upstream resolvers listed in the forwardPlugin. The default value is Random. You can also use the values RoundRobin and Sequential.
4. A maximum of 15 upstreams is allowed per forwardPlugin.
5. You can use upstreamResolvers to override the default forwarding policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers declared in /etc/resolv.conf.
6. Determines the order in which upstream servers listed in upstreams are selected for querying. You can specify one of these values: Random, RoundRobin, or Sequential. The default value is Sequential.
7. When omitted, the platform chooses a default, normally the protocol of the original client request. Set to TCP to specify that the platform should use TCP for all upstream DNS requests, even if the client request uses UDP.
8. Used to configure the transport type, server name, and optional custom CA or CA bundle to use when forwarding DNS requests to an upstream resolver.
9. You can specify two types of upstreams: SystemResolvConf or Network. SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Network resolver. You can specify one or both.
10. If the specified type is Network, you must provide an IP address. The address field must be a valid IPv4 or IPv6 address.
11. If the specified type is Network, you can optionally provide a port. The port field must have a value between 1 and 65535. If you do not specify a port for the upstream, the default port is 853.
Chapter 3. Ingress Operator in Red Hat OpenShift Service on AWS

The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to Red Hat OpenShift Service on AWS cluster services.
This Operator is installed on Red Hat OpenShift Service on AWS clusters by default.
3.1. Red Hat OpenShift Service on AWS Ingress Operator
When you create your Red Hat OpenShift Service on AWS cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients.
The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing.
Red Hat Site Reliability Engineers (SRE) manage the Ingress Operator for Red Hat OpenShift Service on AWS clusters. While you cannot alter the settings for the Ingress Operator, you may view the default Ingress Controller configurations, status, and logs as well as the Ingress Operator status.
3.2. View the default Ingress Controller
The Ingress Operator is a core feature of Red Hat OpenShift Service on AWS and is enabled out of the box.
Every new Red Hat OpenShift Service on AWS installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator automatically recreates it within a minute.
Procedure
View the default Ingress Controller:

$ oc describe --namespace=openshift-ingress-operator ingresscontroller/default
3.3. View Ingress Operator status
You can view and inspect the status of your Ingress Operator.
Procedure
View your Ingress Operator status:

$ oc describe clusteroperators/ingress
3.4. View Ingress Controller logs
You can view your Ingress Controller logs.
Procedure
View your Ingress Controller logs:

$ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>
3.5. View Ingress Controller status
You can view the status of a particular Ingress Controller.
Procedure
View the status of an Ingress Controller:

$ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>
3.6. Management of default Ingress Controller functions
The following table details the components of the default Ingress Controller managed by the Ingress Operator and whether Red Hat Site Reliability Engineering (SRE) maintains this component on Red Hat OpenShift Service on AWS clusters.
Ingress component | Managed by | Default configuration? |
---|---|---|
Scaling Ingress Controller | SRE | Yes |
Ingress Operator thread count | SRE | Yes |
Ingress Controller access logging | SRE | Yes |
Ingress Controller sharding | SRE | Yes |
Ingress Controller route admission policy | SRE | Yes |
Ingress Controller wildcard routes | SRE | Yes |
Ingress Controller X-Forwarded headers | SRE | Yes |
Ingress Controller route compression | SRE | Yes |
Chapter 4. Ingress Node Firewall Operator in Red Hat OpenShift Service on AWS
The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in Red Hat OpenShift Service on AWS.
4.1. Ingress Node Firewall Operator
The Ingress Node Firewall Operator provides ingress firewall rules at the node level by deploying a daemon set to the nodes that you specify and manage in the firewall configurations. To deploy the daemon set, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create an ingress node firewall daemon set, which runs on all nodes that match the nodeSelector.
You configure the rules of the IngressNodeFirewall CR and apply them to clusters by using the nodeSelector and setting values to "true".
The Ingress Node Firewall Operator supports only stateless firewall rules.
Network interface controllers (NICs) that do not support native XDP drivers will run at a lower performance.
You must run the Ingress Node Firewall Operator on Red Hat OpenShift Service on AWS 4.14 or later.
4.2. Installing the Ingress Node Firewall Operator
As a cluster administrator, you can install the Ingress Node Firewall Operator by using the Red Hat OpenShift Service on AWS CLI or the web console.
4.2.1. Installing the Ingress Node Firewall Operator using the CLI
As a cluster administrator, you can install the Operator using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
Procedure
To create the openshift-ingress-node-firewall namespace, enter the following command:
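A sketch of the command, assuming the pod security labels that the privileged firewall daemon set requires:

$ cat << EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
  name: openshift-ingress-node-firewall
EOF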
To create an OperatorGroup CR, enter the following command:
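A sketch of the command; the OperatorGroup name is an arbitrary choice:

$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ingress-node-firewall-operators
  namespace: openshift-ingress-node-firewall
EOF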
Subscribe to the Ingress Node Firewall Operator. To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command:
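A sketch of the command, assuming the stable channel and the redhat-operators catalog source:

$ cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ingress-node-firewall-sub
  namespace: openshift-ingress-node-firewall
spec:
  name: ingress-node-firewall
  channel: stable
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF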
To verify that the Operator is installed, enter the following command:
$ oc get ip -n openshift-ingress-node-firewall
Example output

NAME            CSV                                      APPROVAL    APPROVED
install-5cvnz   ingress-node-firewall.4.0-202211122336   Automatic   true

To verify the version of the Operator, enter the following command:
$ oc get csv -n openshift-ingress-node-firewall
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.0-202211122336 Ingress Node Firewall Operator 4.0-202211122336 ingress-node-firewall.4.0-202211102047 Succeeded
NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.0-202211122336 Ingress Node Firewall Operator 4.0-202211122336 ingress-node-firewall.4.0-202211102047 Succeeded
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.2.2. Installing the Ingress Node Firewall Operator using the web console
As a cluster administrator, you can install the Operator using the web console.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
Procedure
Install the Ingress Node Firewall Operator:
- In the Red Hat OpenShift Service on AWS web console, click Operators → OperatorHub.
- Select Ingress Node Firewall Operator from the list of available Operators, and then click Install.
- On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
- Click Install.
Verify that the Ingress Node Firewall Operator is installed successfully:
- Navigate to the Operators → Installed Operators page.
Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded.
Note: During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:
- Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
- Navigate to the Workloads → Pods page and check the logs for pods in the openshift-ingress-node-firewall project.
- Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

  $ oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management
Note: For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation.
4.3. Deploying Ingress Node Firewall Operator
Prerequisite
- The Ingress Node Firewall Operator is installed.
Procedure
To deploy the Ingress Node Firewall Operator, create an IngressNodeFirewallConfig custom resource that will deploy the Operator's daemon set. You can deploy one or multiple IngressNodeFirewall CRs to nodes by applying firewall rules.
- Create the IngressNodeFirewallConfig, named ingressnodefirewallconfig, inside the openshift-ingress-node-firewall namespace.
- Run the following command to deploy Ingress Node Firewall Operator rules:
$ oc apply -f rule.yaml
4.3.1. Ingress Node Firewall configuration object
The fields for the Ingress Node Firewall configuration object are described in the following table:
Field | Type | Description |
---|---|---|
metadata.name | string | The name of the CR object. The name of the firewall rules object must be ingressnodefirewallconfig. |
metadata.namespace | string | Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace. |
spec.nodeSelector | string | A node selection constraint used to target nodes through specified node labels. For example: spec: nodeSelector: node-role.kubernetes.io/worker: "". Note: One label used in nodeSelector must match a label on the nodes in order for the daemon set to start. |
spec.ebpfProgramManagerMode | boolean | Specifies if the Node Ingress Firewall Operator uses the eBPF Manager Operator or not to manage eBPF programs. This capability is a Technology Preview feature. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
To start, the Operator consumes an IngressNodeFirewallConfig in order to generate the daemon set on all nodes. After this is created, additional firewall rule objects can be created.
4.3.2. Ingress Node Firewall Operator example configuration
A complete Ingress Node Firewall Configuration is specified in the following example:
Example of how to create an Ingress Node Firewall Configuration object
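A minimal sketch, using the required name and namespace from the table above and a worker-node selector:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""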
The Operator consumes the CR object and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector.
4.3.3. Ingress Node Firewall rules object
The fields for the Ingress Node Firewall rules object are described in the following table:
Field | Type | Description |
---|---|---|
metadata.name | string | The name of the CR object. |
interfaces | array | The fields for this object specify the interfaces to apply the firewall rules to. For example, - en0 and - en1. |
nodeSelector | array | You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeSelector labels to true to apply the rule. |
ingress | object | ingress allows you to configure the rules that allow outside access to the services on your cluster. |
4.3.3.1. Ingress object configuration
The values for the ingress object are defined in the following table:
Field | Type | Description |
---|---|---|
sourceCIDRs | array | Allows you to set the CIDR block. You can configure multiple CIDRs from different address families. Note: Different CIDRs allow you to use the same order rule. In the case that there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field specifies which rule is applied first. |
rules | array | Ingress firewall rule objects, ordered by the order field starting from 1 for each sourceCIDRs.CIDR, with up to 100 rules per CIDR. Set the action for each rule to allow or deny the matching traffic. Note: Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server. |
4.3.3.2. Ingress Node Firewall rules object example
A complete Ingress Node Firewall configuration is specified in the following example:
Example Ingress Node Firewall configuration
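A sketch of a rules object; the interface name, CIDRs, ports, and the callout 1 label are illustrative:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall
spec:
  interfaces:
  - eth0
  nodeSelector:
    matchLabels:
      <label_name>: <label_value> 1
  ingress:
  - sourceCIDRs:
    - 172.16.0.0/12
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMP
        icmp:
          icmpType: 8 # ICMP echo request
      action: Deny
    - order: 20
      protocolConfig:
        protocol: TCP
        tcp:
          ports: "8000-9000"
      action: Deny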
1. A <label_name> and a <label_value> must exist on the node and must match the nodeSelector label and value applied to the nodes you want the ingressfirewallconfig CR to run on. The <label_value> can be true or false. By using nodeSelector labels, you can target separate groups of nodes to apply different rules to using the ingressfirewallconfig CR.
4.3.3.3. Zero trust Ingress Node Firewall rules object example
Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH.
A complete configuration of a zero trust Ingress Node Firewall rule set is specified in the following example:
In the following case, you must add all of the ports that your application uses to your allowlist to ensure proper functionality.
Example zero trust Ingress Node Firewall rules
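A sketch of one possible rule set, assuming a secondary interface eth1 and allowing only SSH on port 22 before denying all other traffic; the interface name and label are placeholders:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall-zero-trust
spec:
  interfaces:
  - eth1
  nodeSelector:
    matchLabels:
      <label_name>: <label_value>
  ingress:
  - sourceCIDRs:
    - 0.0.0.0/0
    rules:
    - order: 10
      protocolConfig:
        protocol: TCP
        tcp:
          ports: 22
      action: Allow
    - order: 20
      action: Deny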
eBPF Manager Operator integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.4. Ingress Node Firewall Operator integration
The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.
When this integration is enabled, the following limitations apply:
- The Ingress Node Firewall Operator uses TCX if XDP is not available, and TCX is incompatible with bpfman.
- The Ingress Node Firewall Operator daemon set pods remain in the ContainerCreating state until the firewall rules are applied.
- The Ingress Node Firewall Operator daemon set pods run as privileged.
4.5. Configuring Ingress Node Firewall Operator to use the eBPF Manager Operator
The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.
As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have an account with administrator privileges.
- You installed the Ingress Node Firewall Operator.
- You have installed the eBPF Manager Operator.
Procedure
Apply the following labels to the openshift-ingress-node-firewall namespace:

$ oc label namespace openshift-ingress-node-firewall \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/warn=privileged --overwrite
Edit the IngressNodeFirewallConfig object named ingressnodefirewallconfig and set the ebpfProgramManagerMode field:

Ingress Node Firewall Operator configuration object
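A sketch of the edited object, extending the earlier configuration example:

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  ebpfProgramManagerMode: <ebpf_mode>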
where:

<ebpf_mode>: Specifies whether or not the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. Must be either true or false. If unset, eBPF Manager is not used.
4.6. Viewing Ingress Node Firewall Operator rules
Procedure
Run the following command to view all current rules:

$ oc get ingressnodefirewall
Choose one of the returned <resource> names and run the following command to view the rules or configs:

$ oc get <resource> <name> -o yaml
4.7. Troubleshooting the Ingress Node Firewall Operator
Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD):
$ oc get crds | grep ingressnodefirewall
Example output

NAME                                                              CREATED AT
ingressnodefirewallconfigs.ingressnodefirewall.openshift.io      2022-08-25T10:03:01Z
ingressnodefirewallnodestates.ingressnodefirewall.openshift.io   2022-08-25T10:03:00Z
ingressnodefirewalls.ingressnodefirewall.openshift.io            2022-08-25T10:03:00Z

Run the following command to view the state of the Ingress Node Firewall Operator:
$ oc get pods -n openshift-ingress-node-firewall
Example output

NAME                                       READY   STATUS    RESTARTS   AGE
ingress-node-firewall-controller-manager   2/2     Running   0          5d21h
ingress-node-firewall-daemon-pqx56         3/3     Running   0          5d21h
The following fields provide information about the status of the Operator: READY, STATUS, AGE, and RESTARTS. The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes.

Run the following command to collect all ingress firewall node pods' logs:
$ oc adm must-gather -- gather_ingress_node_firewall

The logs are available in the sos node's report containing eBPF bpftool outputs at /sos_commands/ebpf. These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.