
Networking

OpenShift Container Platform 4.17

Configuring and managing cluster networking

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for configuring and managing your OpenShift Container Platform cluster network, including DNS, ingress, and the Pod network.

Chapter 1. About networking

Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features your cluster needs to manage its network traffic for one or more hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management, and it provides role-based observability tooling to reduce its natural complexities.

The following list highlights some of the most commonly used Red Hat OpenShift Networking features available on your cluster:

  • Primary cluster network provided by a Container Network Interface (CNI) network plugin, such as the default OVN-Kubernetes network plugin

  • Certified 3rd-party alternative primary network plugins
  • Cluster Network Operator for network plugin management
  • Ingress Operator for TLS encrypted web traffic
  • DNS Operator for name assignment
  • MetalLB Operator for traffic load balancing on bare metal clusters
  • IP failover support for high-availability
  • Additional hardware network support through multiple CNI plugins, including for macvlan, ipvlan, and SR-IOV hardware networks
  • IPv4, IPv6, and dual stack addressing
  • Hybrid Linux-Windows host clusters for Windows-based workloads
  • Red Hat OpenShift Service Mesh for discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring of services
  • Single-node OpenShift
  • Network Observability Operator for network debugging and insights
  • Submariner for inter-cluster networking
  • Red Hat Service Interconnect for layer 7 inter-cluster networking

Chapter 2. Understanding networking

Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and for securing network connections:

  • Service types, such as node ports or load balancers
  • API resources, such as Ingress and Route

By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration.

Note

Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block.

This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true.

If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure.
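The following is a minimal sketch of a pod spec that requests host network access by setting the spec.hostNetwork field to true. The pod name, namespace, and image are hypothetical, and additional permissions, such as a security context constraint that permits host networking, might be required.

apiVersion: v1
kind: Pod
metadata:
  name: metadata-reader          # hypothetical pod name
  namespace: example-project     # hypothetical namespace
spec:
  hostNetwork: true              # places the pod in the host network namespace
  containers:
  - name: reader
    image: registry.access.redhat.com/ubi9/ubi-minimal   # hypothetical image
    command: ["sleep", "infinity"]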

2.1. OpenShift Container Platform DNS

If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so that the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, which requires the front-end pods to be recreated to pick up the updated value of the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly and can be provided to the front-end pods as an environment variable.

For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port.
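As an illustration, assume a back-end service named backend in a project named example-project; both names are hypothetical. A front-end pod can then reference the back-end service by its DNS name rather than by the service IP environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: example-project
spec:
  containers:
  - name: frontend
    image: registry.access.redhat.com/ubi9/ubi-minimal   # hypothetical image
    env:
    - name: BACKEND_URL
      # The service DNS name remains stable even if the service is deleted and
      # recreated with a new service IP.
      value: "http://backend.example-project.svc.cluster.local:8080"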

2.2. OpenShift Container Platform Ingress Operator

When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services.

The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints.

2.2.1. Comparing routes and Ingress

The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy, which is an open source load balancer solution.

The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.

Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name indication (SNI), and TLS with certificates. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource.
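For example, the following is a minimal sketch of a Route that exposes a service with edge TLS termination. The route name, namespace, service name, target port, and hostname are hypothetical:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: example-project
spec:
  host: frontend.apps.example.com    # hypothetical external hostname
  to:
    kind: Service
    name: frontend                   # hypothetical service to expose
  port:
    targetPort: 8080
  tls:
    termination: edge                          # TLS is terminated at the Ingress Controller
    insecureEdgeTerminationPolicy: Redirect    # redirect HTTP requests to HTTPS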

2.3. Glossary of common terms for OpenShift Container Platform networking

This glossary defines common terms that are used in the networking content.

authentication
To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API.
AWS Load Balancer Operator
The AWS Load Balancer (ALB) Operator deploys and manages an instance of the aws-load-balancer-controller.
Cluster Network Operator
The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation.
config map
A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap. Applications running in a pod can use this data.
custom resource (CR)
A CR is an extension of the Kubernetes API. You can create custom resources.
DNS
Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.
DNS Operator
The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform.
deployment
A Kubernetes resource object that maintains the life cycle of an application.
domain
Domain is a DNS name serviced by the Ingress Controller.
egress
The process of data sharing externally through a network’s outbound traffic from a pod.
External DNS Operator
The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform.
HTTP-based route
An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port.
Ingress
The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster.
Ingress Controller
The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster.
installer-provisioned infrastructure
The installation program deploys and configures the infrastructure that the cluster runs on.
kubelet
A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod.
Kubernetes NMState Operator
The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState.
kube-proxy
Kube-proxy is a proxy service that runs on each node and helps make services available to external hosts. It forwards requests to the correct containers and can perform primitive load balancing.
load balancers
OpenShift Container Platform uses load balancers for communicating from outside the cluster with services running in the cluster.
MetalLB Operator
As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service.
multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
namespaces
A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources.
networking
Network information of an OpenShift Container Platform cluster.
node
A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine.
OpenShift Container Platform Ingress Operator
The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform services.
pod
One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed.
PTP Operator
The PTP Operator creates and manages the linuxptp services.
route
The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.
scaling
Increasing or decreasing the resource capacity.
service
Exposes a running application on a set of pods.
Single Root I/O Virtualization (SR-IOV) Network Operator
The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster.
software-defined networking (SDN)
A software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster.
Stream Control Transmission Protocol (SCTP)
SCTP is a reliable message based protocol that runs on top of an IP network.
taint
Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node.
toleration
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.
web console
A user interface (UI) to manage OpenShift Container Platform.

Chapter 3. Zero trust networking

Zero trust is an approach to designing security architectures based on the premise that every interaction begins in an untrusted state. This contrasts with traditional architectures, which might determine trustworthiness based on whether communication starts inside a firewall. More specifically, zero trust attempts to close gaps in security architectures that rely on implicit trust models and one-time authentication.

OpenShift Container Platform can add some zero trust networking capabilities to containers running on the platform without requiring changes to the containers or the software running in them. There are also several products that Red Hat offers that can further augment the zero trust networking capabilities of containers. If you have the ability to change the software running in the containers, then there are other projects that Red Hat supports that can add further capabilities.

Explore the following targeted capabilities of zero trust networking.

3.1. Root of trust

Public certificates and private keys are critical to zero trust networking. These are used to identify components to one another, authenticate, and secure traffic. The certificates are signed by other certificates, and there is a chain of trust to a root certificate authority (CA). Everything participating in the network ultimately needs the public key for a root CA so that it can validate the chain of trust. For public-facing services, these are usually the set of root CAs that are globally known and whose keys are distributed with operating systems, web browsers, and so on. However, it is possible to run a private CA for a cluster or a corporation if the certificate of the private CA is distributed to all parties.

Leverage:

  • OpenShift Container Platform: OpenShift creates a cluster CA at installation that is used to secure the cluster resources. However, OpenShift Container Platform can also create and sign certificates for services in the cluster, and can inject the cluster CA bundle into a pod if requested. Service certificates created and signed by OpenShift Container Platform have a 26-month time to live (TTL) and are rotated automatically at 13 months. They can also be rotated manually if necessary.
  • OpenShift cert-manager Operator: cert-manager allows you to request keys that are signed by an external root of trust. There are many configurable issuers to integrate with external issuers, along with ways to run with a delegated signing certificate. The cert-manager API can be used by other software in zero trust networking to request the necessary certificates (for example, Red Hat OpenShift Service Mesh), or can be used directly by customer software.

3.2. Traffic authentication and encryption

Ensure that all traffic on the wire is encrypted and the endpoints are identifiable. An example of this is Mutual TLS, or mTLS, which is a method for mutual authentication.

Leverage:

  • OpenShift Container Platform: With transparent pod-to-pod IPsec, the source and destination of the traffic can be identified by the IP address. Egress traffic can also be encrypted using IPsec. By using the egress IP feature, the source IP address of the traffic can be used to identify the source of the traffic inside the cluster.
  • Red Hat OpenShift Service Mesh: Provides powerful mTLS capabilities that can transparently augment traffic leaving a pod to provide authentication and encryption.
  • OpenShift cert-manager Operator: Use custom resource definitions (CRDs) to request certificates that can be mounted for your programs to use for SSL/TLS protocols.

3.3. Identification and authentication

After you have the ability to mint certificates using a CA, you can use it to establish trust relationships by verification of the identity of the other end of a connection — either a user or a client machine. This also requires management of certificate lifecycles to limit use if compromised.

Leverage:

  • OpenShift Container Platform: Cluster-signed service certificates to ensure that a client is talking to a trusted endpoint. This requires that the service uses SSL/TLS and that the client uses the cluster CA. The client identity must be provided using some other means.
  • Red Hat Single Sign-On: Provides request authentication integration with enterprise user directories or third-party identity providers.
  • Red Hat OpenShift Service Mesh: Transparent upgrade of connections to mTLS, auto-rotation, custom certificate expiration, and request authentication with JSON web token (JWT).
  • OpenShift cert-manager Operator: Creation and management of certificates for use by your application. Certificates can be controlled by CRDs and mounted as secrets, or your application can be changed to interact directly with the cert-manager API.
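For illustration, the following is a minimal sketch of a cert-manager Certificate resource that requests a certificate and stores it in a secret that your application can mount. The names, DNS name, durations, and issuer reference are hypothetical and assume that the OpenShift cert-manager Operator and a suitable issuer are already configured:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: backend-tls               # hypothetical certificate name
  namespace: example-project      # hypothetical namespace
spec:
  secretName: backend-tls         # secret that stores the signed certificate and key
  dnsNames:
  - backend.example-project.svc   # hypothetical DNS name to include in the certificate
  duration: 2160h                 # 90-day lifetime
  renewBefore: 360h               # renew 15 days before expiry
  issuerRef:
    name: example-issuer          # hypothetical Issuer or ClusterIssuer
    kind: Issuer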

3.4. Inter-service authorization

It is critical to be able to control access to services based on the identity of the requester. This is done by the platform and does not require each application to implement it. That allows better auditing and inspection of the policies.

Leverage:

3.5. Transaction-level verification

In addition to the ability to identify and authenticate connections, it is also useful to control access to individual transactions. This can include rate-limiting by source, observability, and semantic validation that a transaction is well formed.

Leverage:

3.6. Risk assessment

As the number of security policies in a cluster increase, visualization of what the policies allow and deny becomes increasingly important. These tools make it easier to create, visualize, and manage cluster security policies.

Leverage:

3.7. Site-wide policy enforcement and distribution

After deploying applications on a cluster, it becomes challenging to manage all of the objects that make up the security rules. It becomes critical to be able to apply site-wide policies and audit the deployed objects for compliance with the policies. This should allow for delegation of some permissions to users and cluster administrators within defined bounds, and should allow for exceptions to the policies if necessary.

Leverage:

3.8. Observability for constant, and retrospective, evaluation

After you have a running cluster, you want to be able to observe the traffic and verify that the traffic comports with the defined rules. This is important for intrusion detection and forensics, and is helpful for operational load management.

Leverage:

3.9. Endpoint security

It is important to be able to trust that the software running the services in your cluster has not been compromised. For example, you might need to ensure that certified images are run on trusted hardware, and have policies to only allow connections to or from an endpoint based on endpoint characteristics.

Leverage:

  • OpenShift Container Platform: Secure Boot can ensure that the nodes in the cluster are running trusted software, so the platform itself (including the container runtime) has not been tampered with. You can configure OpenShift Container Platform to only run images that have been signed by certain signatures.
  • Red Hat Trusted Artifact Signer: This can be used in a trusted build chain and produce signed container images.

3.10. Extending trust outside of the cluster

You might want to extend trust outside of the cluster by allowing a cluster to mint CAs for a subdomain. Alternatively, you might want to attest to workload identity in the cluster to a remote endpoint.

Leverage:

  • OpenShift cert-manager Operator: You can use cert-manager to manage delegated CAs so that you can distribute trust across different clusters, or through your organization.
  • Red Hat OpenShift Service Mesh: Can use SPIFFE to provide remote attestation of workloads to endpoints running in remote or local clusters.

Chapter 4. Accessing hosts

Learn how to create a bastion host to access OpenShift Container Platform instances and access the control plane nodes with secure shell (SSH) access.

4.1. Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster

The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster. To be able to SSH to your OpenShift Container Platform hosts, you must follow this procedure.

Procedure

  1. Create a security group that allows SSH access into the virtual private cloud (VPC) created by the openshift-install command.
  2. Create an Amazon EC2 instance on one of the public subnets the installer created.
  3. Associate a public IP address with the Amazon EC2 instance that you created.

    Unlike with the OpenShift Container Platform installation, you should associate the Amazon EC2 instance you created with an SSH keypair. It does not matter what operating system you choose for this instance, as it will simply serve as an SSH bastion to bridge the internet into your OpenShift Container Platform cluster’s VPC. The Amazon Machine Image (AMI) you use does matter. With Red Hat Enterprise Linux CoreOS (RHCOS), for example, you can provide keys via Ignition, like the installer does.

  4. After you provisioned your Amazon EC2 instance and can SSH into it, you must add the SSH key that you associated with your OpenShift Container Platform installation. This key can be different from the key for the bastion instance, but does not have to be.

    Note

    Direct SSH access is only recommended for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead.

  5. Run oc get nodes, inspect the output, and choose one of the nodes that is a control plane node. The hostname looks similar to ip-10-0-1-163.ec2.internal.
  6. From the bastion SSH host you manually deployed into Amazon EC2, SSH into that control plane host. Ensure that you use the same SSH key you specified during the installation:

    $ ssh -i <ssh-key-path> core@<master-hostname>
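As noted earlier, when the Kubernetes API is responsive, you can usually avoid SSH entirely by starting a debug pod on the node instead. The following is a minimal sketch that uses the example hostname from step 5:

$ oc debug node/ip-10-0-1-163.ec2.internal

From the debug shell, run chroot /host to use the host binaries on the node.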

Chapter 5. Networking dashboards

Networking metrics are viewable in dashboards within the OpenShift Container Platform web console, under Observe → Dashboards.

5.1. Network Observability Operator

If you have the Network Observability Operator installed, you can view network traffic metrics dashboards by selecting the Netobserv dashboard from the Dashboards drop-down list. For more information about metrics available in this Dashboard, see Network Observability metrics dashboards.

5.2. Networking and OVN-Kubernetes dashboard

You can view both general networking metrics as well as OVN-Kubernetes metrics from the dashboard.

To view general networking metrics, select Networking/Linux Subsystem Stats from the Dashboards drop-down list. You can view the following networking metrics from the dashboard: Network Utilisation, Network Saturation, and Network Errors.

To view OVN-Kubernetes metrics, select Networking/Infrastructure from the Dashboards drop-down list. You can view the following OVN-Kubernetes metrics: Networking Configuration, TCP Latency Probes, Control Plane Resources, and Worker Resources.

5.3. Ingress Operator dashboard

You can view networking metrics handled by the Ingress Operator from the dashboard. This includes metrics like the following:

  • Incoming and outgoing bandwidth
  • HTTP error rates
  • HTTP server response latency

To view these Ingress metrics, select Networking/Ingress from the Dashboards drop-down list. You can view Ingress metrics for the following categories: Top 10 Per Route, Top 10 Per Namespace, and Top 10 Per Shard.

Chapter 6. Network security

6.1. Understanding network policy APIs

Kubernetes offers two features that users can use to enforce network security. The first is the NetworkPolicy API, which is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies.

The second feature is AdminNetworkPolicy, which consists of two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects when necessary. When used together, ANP, BANP, and network policy can achieve full multi-tenant isolation that administrators can use to secure their cluster.

OVN-Kubernetes CNI in OpenShift Container Platform implements these network policies using Access Control List (ACL) Tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3.

Tier 1 evaluates AdminNetworkPolicy (ANP) objects. Tier 2 evaluates NetworkPolicy objects. Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects.

OVN-Kubernetes Access Control List (ACL)

ANPs are evaluated first. When the match is an ANP allow or deny rule, any existing NetworkPolicy and BaselineAdminNetworkPolicy (BANP) objects in the cluster are skipped from evaluation. When the match is an ANP pass rule, evaluation moves from tier 1 of the ACL to tier 2, where NetworkPolicy objects are evaluated. If no NetworkPolicy matches the traffic, evaluation moves from tier 2 ACLs to tier 3 ACLs, where the BANP is evaluated.

6.1.1. Key differences between AdminNetworkPolicy and NetworkPolicy custom resources

The following table explains key differences between the cluster scoped AdminNetworkPolicy API and the namespace scoped NetworkPolicy API.

Policy elements | AdminNetworkPolicy | NetworkPolicy
--- | --- | ---
Applicable user | Cluster administrator or equivalent | Namespace owners
Scope | Cluster | Namespaced
Drop traffic | Supported with an explicit Deny action set as a rule. | Supported via implicit Deny isolation at policy creation time.
Delegate traffic | Supported with a Pass action set as a rule. | Not applicable
Allow traffic | Supported with an explicit Allow action set as a rule. | The default action for all rules is to allow.
Rule precedence within the policy | Depends on the order in which they appear within an ANP. The higher the rule's position, the higher the precedence. | Rules are additive.
Policy precedence | Among ANPs, the priority field sets the order for evaluation. The lower the priority number, the higher the policy precedence. | There is no policy ordering between policies.
Feature precedence | Evaluated first via tier 1 ACL; BANP is evaluated last via tier 3 ACL. | Enforced after ANP and before BANP; evaluated in tier 2 of the ACL.
Matching pod selection | Can apply different rules across namespaces. | Can apply different rules across pods in a single namespace.
Cluster egress traffic | Supported via nodes and networks peers. | Supported through the ipBlock field along with accepted CIDR syntax.
Cluster ingress traffic | Not supported | Not supported
Fully qualified domain names (FQDN) peer support | Not supported | Not supported
Namespace selectors | Supports advanced selection of namespaces with the use of the namespaces.matchLabels field. | Supports label-based namespace selection with the use of the namespaceSelector field.

6.2. Admin network policy

6.2.1. OVN-Kubernetes AdminNetworkPolicy

6.2.1.1. AdminNetworkPolicy

An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use ANP to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies on a cluster-scoped level that are non-overridable by NetworkPolicy objects.

The key difference between AdminNetworkPolicy and NetworkPolicy objects is that the former is for administrators and is cluster scoped, while the latter is for tenant owners and is namespace scoped.

An ANP allows administrators to specify the following:

  • A priority value that determines the order of its evaluation. The lower the value the higher the precedence.
  • A subject that consists of a set of namespaces, or pods within namespaces, on which the policy is applied.
  • A list of ingress rules to be applied for all ingress traffic towards the subject.
  • A list of egress rules to be applied for all egress traffic from the subject.
AdminNetworkPolicy example

Example 6.1. Example YAML file for an ANP

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: sample-anp-deny-pass-rules 1
spec:
  priority: 50 2
  subject:
    namespaces:
      matchLabels:
          kubernetes.io/metadata.name: example.name 3
  ingress: 4
  - name: "deny-all-ingress-tenant-1" 5
    action: "Deny"
    from:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-anp: tenant-1
        podSelector:
          matchLabels:
            custom-anp: tenant-1 6
  egress: 7
  - name: "pass-all-egress-to-tenant-1"
    action: "Pass"
    to:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-anp: tenant-1
        podSelector:
          matchLabels:
            custom-anp: tenant-1
1
Specify a name for your ANP.
2
The spec.priority field supports a maximum of 100 ANPs, with values of 0-99, in a cluster. The lower the value, the higher the precedence. Creating AdminNetworkPolicy objects with the same priority creates a nondeterministic outcome.
3
Specify the namespace to apply the ANP resource.
4
ANPs have both ingress and egress rules. The spec.ingress field accepts values of Pass, Deny, and Allow for the action field.
5
Specify a name for the rule in the ingress.name field.
6
Specify podSelector.matchLabels to select pods within the namespaces selected by namespaceSelector.matchLabels as ingress peers.
7
ANPs have both ingress and egress rules. The spec.egress field accepts values of Pass, Deny, and Allow for the action field.


6.2.1.1.1. AdminNetworkPolicy actions for rules

As an administrator, you can set Allow, Deny, or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule.

AdminNetworkPolicy Allow example

The following ANP, which is defined at priority 9, ensures that all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster.

Example 6.2. Example YAML file for a strong Allow ANP

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-monitoring
spec:
  priority: 9
  subject:
    namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  ingress:
  - name: "allow-ingress-from-monitoring"
    action: "Allow"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
# ...

This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects and the monitoring tenant also has no say in what it can or cannot monitor.

AdminNetworkPolicy Deny example

The following ANP, which is defined at priority 5, ensures that all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have labels security: restricted).

Example 6.3. Example YAML file for a strong Deny ANP

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: block-monitoring
spec:
  priority: 5
  subject:
    namespaces:
      matchLabels:
        security: restricted
  ingress:
  - name: "deny-ingress-from-monitoring"
    action: "Deny"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
# ...

This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure’s monitoring service cannot scrape anything from these sensitive namespaces.

When combined with the strong Allow example, the block-monitoring ANP has a lower priority value, giving it higher precedence, which ensures that restricted tenants are never monitored.

AdminNetworkPolicy Pass example

The following ANP, which is defined at priority 7, ensures that all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have labels security: internal) is passed on to tier 2 of the ACLs and evaluated by the namespaces’ NetworkPolicy objects.

Example 6.4. Example YAML file for a strong Pass ANP

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: pass-monitoring
spec:
  priority: 7
  subject:
    namespaces:
      matchLabels:
        security: internal
  ingress:
  - name: "pass-ingress-from-monitoring"
    action: "Pass"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
# ...

This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose whether their metrics should be scraped by the infrastructure’s monitoring service using namespace scoped NetworkPolicy objects.

6.2.2. OVN-Kubernetes BaselineAdminNetworkPolicy

6.2.2.1. BaselineAdminNetworkPolicy

BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use BANP to set up and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for BANP are allow or deny.

The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy in case passed traffic does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that blocks intra-cluster traffic by default, requiring users to use NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource.

A BANP allows administrators to specify:

  • A subject that consists of a set of namespaces, or pods within namespaces, on which the policy is applied.
  • A list of ingress rules to be applied for all ingress traffic towards the subject.
  • A list of egress rules to be applied for all egress traffic from the subject.
BaselineAdminNetworkPolicy example

Example 6.5. Example YAML file for BANP

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default 1
spec:
  subject:
    namespaces:
      matchLabels:
          kubernetes.io/metadata.name: example.name 2
  ingress: 3
  - name: "deny-all-ingress-from-tenant-1" 4
    action: "Deny"
    from:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-banp: tenant-1 5
        podSelector:
          matchLabels:
            custom-banp: tenant-1 6
  egress:
  - name: "allow-all-egress-to-tenant-1"
    action: "Allow"
    to:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-banp: tenant-1
        podSelector:
          matchLabels:
            custom-banp: tenant-1
1
The policy name must be default because BANP is a singleton object.
2
Specify the namespace to apply the BANP resource to.
3
BANPs have both ingress and egress rules. The spec.ingress and spec.egress fields accept values of Deny and Allow for the action field.
4
Specify a name for the rule in the ingress.name field.
5
Specify the namespaces from which to select the pods as ingress peers for the BANP resource.
6
Specify podSelector.matchLabels to select the pods within the selected namespaces as ingress peers for the BANP resource.
BaselineAdminNetworkPolicy Deny example

The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy.

Example 6.6. Example YAML file for a guardrail Deny rule

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default
spec:
  subject:
    namespaces:
      matchLabels:
        security: internal
  ingress:
  - name: "deny-ingress-from-monitoring"
    action: "Deny"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
# ...

You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant.

As an administrator, if you apply both the "AdminNetworkPolicy Pass action example" and the "BaselineAdminNetworkPolicy Deny example", tenants can then choose to create a NetworkPolicy resource that is evaluated before the BANP.

For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic:

Example 6.7. Example NetworkPolicy

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: tenant-1
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
# ...

In this scenario, Tenant 1’s policy would be evaluated after the "AdminNetworkPolicy Pass action example" and before the "BaselineAdminNetworkPolicy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal. With Tenant 1’s NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you have not monitored internal tenants by default; instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP.

6.2.3. Monitoring ANP and BANP

AdminNetworkPolicy and BaselineAdminNetworkPolicy resources have metrics that can be used for monitoring and managing your policies. See the following table for more details on the metrics.

6.2.3.1. Metrics for AdminNetworkPolicy
Name | Description | Explanation
--- | --- | ---
ovnkube_controller_admin_network_policies | Not applicable | The total number of AdminNetworkPolicy resources in the cluster.
ovnkube_controller_baseline_admin_network_policies | Not applicable | The total number of BaselineAdminNetworkPolicy resources in the cluster. The value should be 0 or 1.
ovnkube_controller_admin_network_policies_rules | direction: specifies either Ingress or Egress. action: specifies either Pass, Allow, or Deny. | The total number of rules across all ANP policies in the cluster grouped by direction and action.
ovnkube_controller_baseline_admin_network_policies_rules | direction: specifies either Ingress or Egress. action: specifies either Allow or Deny. | The total number of rules across all BANP policies in the cluster grouped by direction and action.
ovnkube_controller_admin_network_policies_db_objects | table_name: specifies either ACL or Address_Set | The total number of OVN Northbound database (nbdb) objects that are created by all the ANP in the cluster grouped by the table_name.
ovnkube_controller_baseline_admin_network_policies_db_objects | table_name: specifies either ACL or Address_Set | The total number of OVN Northbound database (nbdb) objects that are created by all the BANP in the cluster grouped by the table_name.
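For example, you can query these metrics from the Observe → Metrics page of the web console. The following PromQL query is a minimal sketch (the aggregation shown is an assumption, not taken from the product documentation) that returns the number of ANP rules grouped by direction and action:

sum by (direction, action) (ovnkube_controller_admin_network_policies_rules)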

6.2.4. Egress nodes and networks peer for AdminNetworkPolicy

This section explains nodes and networks peers. Administrators can use the examples in this section to design AdminNetworkPolicy and BaselineAdminNetworkPolicy to control northbound traffic in their cluster.

6.2.4.1. Northbound traffic controls for AdminNetworkPolicy and BaselineAdminNetworkPolicy

In addition to supporting east-west traffic controls, ANP and BANP also allow administrators to control their northbound traffic leaving the cluster or traffic leaving the node to other nodes in the cluster. End-users can do the following:

  • Implement egress traffic control towards cluster nodes using nodes egress peer
  • Implement egress traffic control towards Kubernetes API servers using nodes or networks egress peers
  • Implement egress traffic control towards external destinations outside the cluster using networks peer
Note

For ANP and BANP, nodes and networks peers can be specified for egress rules only.

6.2.4.1.1. Using nodes peer to control egress traffic to cluster nodes

Using the nodes peer, administrators can control egress traffic from pods to nodes in the cluster. A benefit of this is that you do not have to change the policy when nodes are added to or deleted from the cluster.

The following example allows egress traffic to the Kubernetes API server on port 6443 by any of the namespaces with a restricted, confidential, or internal level of security using the node selector peer. It also denies traffic to all worker nodes in your cluster from any of the namespaces with a restricted, confidential, or internal level of security.

Example 6.8. Example of ANP Allow egress using nodes peer

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: egress-security-allow
spec:
  egress:
  - action: Deny
    to:
    - nodes:
        matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists
  - action: Allow
    name: allow-to-kubernetes-api-server-and-engr-dept-pods
    ports:
    - portNumber:
        port: 6443
        protocol: TCP
    to:
    - nodes: 1
        matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
    - pods: 2
        namespaceSelector:
          matchLabels:
            dept: engr
        podSelector: {}
  priority: 55
  subject: 3
    namespaces:
      matchExpressions:
      - key: security 4
        operator: In
        values:
        - restricted
        - confidential
        - internal
1
Specifies a node or set of nodes in the cluster using the matchExpressions field.
2
Specifies all the pods labeled with dept: engr.
3
Specifies the subject of the ANP which includes any namespaces that match the labels used by the network policy. The example matches any of the namespaces with restricted, confidential, or internal level of security.
4
Specifies key/value pairs for matchExpressions field.
6.2.4.1.2. Using networks peer to control egress traffic towards external destinations

Cluster administrators can use CIDR ranges in the networks peer and apply a policy to control egress traffic leaving pods and going to a destination whose IP address is within the CIDR range specified in the networks field.

The following example uses networks peer and combines ANP and BANP policies to restrict egress traffic.

Important

Use the empty selector ({}) in the namespace field for ANP and BANP with caution. When using an empty selector, it also selects OpenShift namespaces.

If you use values of 0.0.0.0/0 in an ANP or BANP Deny rule, you must set a higher priority ANP Allow rule to necessary destinations before setting the Deny to 0.0.0.0/0.

Example 6.9. Example of ANP and BANP using networks peers

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: network-as-egress-peer
spec:
  priority: 70
  subject:
    namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  egress:
  - name: "deny-egress-to-external-dns-servers"
    action: "Deny"
    to:
    - networks: 1
      - 8.8.8.8/32
      - 8.8.4.4/32
      - 208.67.222.222/32
    ports:
      - portNumber:
          protocol: UDP
          port: 53
  - name: "allow-all-egress-to-intranet"
    action: "Allow"
    to:
    - networks: 2
      - 89.246.180.0/22
      - 60.45.72.0/22
  - name: "allow-all-intra-cluster-traffic"
    action: "Allow"
    to:
    - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  - name: "pass-all-egress-to-internet"
    action: "Pass"
    to:
    - networks:
      - 0.0.0.0/0 3
---
apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default
spec:
  subject:
    namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  egress:
  - name: "deny-all-egress-to-internet"
    action: "Deny"
    to:
    - networks:
      - 0.0.0.0/0 4
---
1
Use networks to specify a range of CIDR networks outside of the cluster.
2
Specifies the CIDR ranges for the intra-cluster traffic from your resources.
3 4
Specifies a Deny egress to everything by setting the networks values to 0.0.0.0/0. Make sure you have a higher priority Allow rule to necessary destinations before setting a Deny to 0.0.0.0/0, because this denies all traffic, including traffic to the Kubernetes API and DNS servers.

Collectively, the network-as-egress-peer ANP and default BANP using networks peers enforce the following egress policy:

  • All pods cannot talk to external DNS servers at the listed IP addresses.
  • All pods can talk to rest of the company’s intranet.
  • All pods can talk to other pods, nodes, and services.
  • All pods cannot talk to the internet. By combining the last ANP Pass rule and the strong BANP Deny rule, a guardrail policy is created that secures traffic in the cluster.
6.2.4.1.3. Using nodes peer and networks peer together

Cluster administrators can combine nodes and networks peers in their ANP and BANP policies.

Example 6.10. Example of nodes and networks peer

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: egress-peer-1 1
spec:
  egress: 2
  - action: "Allow"
    name: "allow-egress"
    to:
    - nodes:
        matchExpressions:
        - key: worker-group
          operator: In
          values:
          - workloads # Egress traffic from nodes with label worker-group: workloads is allowed.
    - networks:
      - 104.154.164.170/32
    - pods:
        namespaceSelector:
          matchLabels:
            apps: external-apps
        podSelector:
          matchLabels:
            app: web # This rule in the policy allows the traffic directed to pods labeled apps: web in projects with apps: external-apps to leave the cluster.
  - action: "Deny"
    name: "deny-egress"
    to:
    - nodes:
        matchExpressions:
        - key: worker-group
          operator: In
          values:
          - infra # Egress traffic from nodes with label worker-group: infra is denied.
    - networks:
      - 104.154.164.160/32 # Egress traffic to this IP address from cluster is denied.
    - pods:
        namespaceSelector:
          matchLabels:
            apps: internal-apps
        podSelector: {}
  - action: "Pass"
    name: "pass-egress"
    to:
    - nodes:
        matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists # All other egress traffic is passed to NetworkPolicy or BANP for evaluation.
  priority: 30 3
  subject: 4
    namespaces:
      matchLabels:
        apps: all-apps
1
Specifies the name of the policy.
2
For nodes and networks peers, northbound traffic controls can be used only as egress rules in an ANP.
3
Specifies the priority of the ANP, determining the order in which ANPs are evaluated. Lower priority values have higher precedence. ANP accepts values of 0-99, with 0 being the highest priority and 99 being the lowest.
4
Specifies the set of pods in the cluster on which the rules of the policy are to be applied. In the example, any pods with the apps: all-apps label across all namespaces are the subject of the policy.

6.2.5. Troubleshooting AdminNetworkPolicy

6.2.5.1. Checking creation of ANP

To check that your AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) are created correctly, check the status outputs of the following commands: oc describe anp or oc describe banp.

A good status indicates that OVN DB plumbing was successful and that the reported reason is SetupSucceeded.
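For example, to inspect the status of a specific ANP, you can run the following command, where <anp_name> is the name of your policy:

$ oc describe anp <anp_name>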

Example 6.11. Example ANP with a good status

...
Conditions:
Last Transition Time:  2024-06-08T20:29:00Z
Message:               Setting up OVN DB plumbing was successful
Reason:                SetupSucceeded
Status:                True
Type:                  Ready-In-Zone-ovn-control-plane
Last Transition Time:  2024-06-08T20:29:00Z
Message:               Setting up OVN DB plumbing was successful
Reason:                SetupSucceeded
Status:                True
Type:                  Ready-In-Zone-ovn-worker
Last Transition Time:  2024-06-08T20:29:00Z
Message:               Setting up OVN DB plumbing was successful
Reason:                SetupSucceeded
Status:                True
Type:                  Ready-In-Zone-ovn-worker2
...

If plumbing is unsuccessful, an error is reported from the respective zone controller.

Example 6.12. Example of an ANP with a bad status and error message

...
Status:
  Conditions:
    Last Transition Time:  2024-06-25T12:47:44Z
    Message:               error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99
    Reason:                SetupFailed
    Status:                False
    Type:                  Ready-In-Zone-example-worker-1.example.example-org.net
    Last Transition Time:  2024-06-25T12:47:45Z
    Message:               error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99
    Reason:                SetupFailed
    Status:                False
    Type:                  Ready-In-Zone-example-worker-0.example.example-org.net
    Last Transition Time:  2024-06-25T12:47:44Z
    Message:               error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99
    Reason:                SetupFailed
    Status:                False
    Type:                  Ready-In-Zone-example-ctlplane-1.example.example-org.net
    Last Transition Time:  2024-06-25T12:47:44Z
    Message:               error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99
    Reason:                SetupFailed
    Status:                False
    Type:                  Ready-In-Zone-example-ctlplane-2.example.example-org.net
    Last Transition Time:  2024-06-25T12:47:44Z
    Message:               error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99
    Reason:                SetupFailed
    Status:                False
    Type:                  Ready-In-Zone-example-ctlplane-0.example.example-org.net

See the following section for nbctl commands to help troubleshoot unsuccessful policies.

6.2.5.1.1. Using nbctl commands for ANP and BANP

To troubleshoot an unsuccessful setup, start by looking at the OVN Northbound database (nbdb) objects, including ACL, AddressSet, and Port_Group. To view the nbdb, you must be inside the pod on that node so that you can view the objects in that node’s database.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift CLI (oc) installed.
Note

To run ovn-nbctl commands in a cluster, you must open a remote shell into the nbdb container on the relevant node.

The following policy was used to generate outputs.

Example 6.13. AdminNetworkPolicy used to generate outputs

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: cluster-control
spec:
  priority: 34
  subject:
    namespaces:
      matchLabels:
        anp: cluster-control-anp # Only namespaces with this label have this ANP
  ingress:
  - name: "allow-from-ingress-router" # rule0
    action: "Allow"
    from:
    - namespaces:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  - name: "allow-from-monitoring" # rule1
    action: "Allow"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: openshift-monitoring
    ports:
    - portNumber:
        protocol: TCP
        port: 7564
    - namedPort: "scrape"
  - name: "allow-from-open-tenants" # rule2
    action: "Allow"
    from:
    - namespaces: # open tenants
        matchLabels:
          tenant: open
  - name: "pass-from-restricted-tenants" # rule3
    action: "Pass"
    from:
    - namespaces: # restricted tenants
        matchLabels:
          tenant: restricted
  - name: "default-deny" # rule4
    action: "Deny"
    from:
    - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  egress:
  - name: "allow-to-dns" # rule0
    action: "Allow"
    to:
    - pods:
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-dns
        podSelector:
          matchLabels:
            app: dns
    ports:
    - portNumber:
        protocol: UDP
        port: 5353
  - name: "allow-to-kapi-server" # rule1
    action: "Allow"
    to:
    - nodes:
        matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
    ports:
    - portNumber:
        protocol: TCP
        port: 6443
  - name: "allow-to-splunk" # rule2
    action: "Allow"
    to:
    - namespaces:
        matchLabels:
          tenant: splunk
    ports:
    - portNumber:
        protocol: TCP
        port: 8991
    - portNumber:
        protocol: TCP
        port: 8992
  - name: "allow-to-open-tenants-and-intranet-and-worker-nodes" # rule3
    action: "Allow"
    to:
    - nodes: # worker-nodes
        matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists
    - networks: # intranet
      - 172.29.0.0/30
      - 10.0.54.0/19
      - 10.0.56.38/32
      - 10.0.69.0/24
    - namespaces: # open tenants
        matchLabels:
          tenant: open
  - name: "pass-to-restricted-tenants" # rule4
    action: "Pass"
    to:
    - namespaces: # restricted tenants
        matchLabels:
          tenant: restricted
  - name: "default-deny"
    action: "Deny"
    to:
    - networks:
      - 0.0.0.0/0

Procedure

  1. List pods with node information by running the following command:

    $ oc get pods -n openshift-ovn-kubernetes -owide

    Example output

    NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE                                       NOMINATED NODE   READINESS GATES
    ovnkube-control-plane-5c95487779-8k9fd   2/2     Running   0          34m   10.0.0.5     ci-ln-0tv5gg2-72292-6sjw5-master-0         <none>           <none>
    ovnkube-control-plane-5c95487779-v2xn8   2/2     Running   0          34m   10.0.0.3     ci-ln-0tv5gg2-72292-6sjw5-master-1         <none>           <none>
    ovnkube-node-524dt                       8/8     Running   0          33m   10.0.0.4     ci-ln-0tv5gg2-72292-6sjw5-master-2         <none>           <none>
    ovnkube-node-gbwr9                       8/8     Running   0          24m   10.0.128.4   ci-ln-0tv5gg2-72292-6sjw5-worker-c-s9gqt   <none>           <none>
    ovnkube-node-h4fpx                       8/8     Running   0          33m   10.0.0.5     ci-ln-0tv5gg2-72292-6sjw5-master-0         <none>           <none>
    ovnkube-node-j4hzw                       8/8     Running   0          24m   10.0.128.2   ci-ln-0tv5gg2-72292-6sjw5-worker-a-hzbh5   <none>           <none>
    ovnkube-node-wdhgv                       8/8     Running   0          33m   10.0.0.3     ci-ln-0tv5gg2-72292-6sjw5-master-1         <none>           <none>
    ovnkube-node-wfncn                       8/8     Running   0          24m   10.0.128.3   ci-ln-0tv5gg2-72292-6sjw5-worker-b-5bb7f   <none>           <none>

  2. Navigate into a pod to look at the northbound database by running the following command:

    $ oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-524dt
  3. Run the following command to look at the ACLs in the nbdb:

    $ ovn-nbctl find ACL 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}'

    where:

    cluster-control
    Specifies the name of the AdminNetworkPolicy that you are troubleshooting.
    AdminNetworkPolicy
    Specifies the type: AdminNetworkPolicy or BaselineAdminNetworkPolicy.

    Example 6.14. Example output for ACLs

    _uuid               : 0d5e4722-b608-4bb1-b625-23c323cc9926
    action              : allow-related
    direction           : to-lport
    external_ids        : {direction=Ingress, gress-index="2", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "outport == @a14645450421485494999 && ((ip4.src == $a13730899355151937870))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Ingress:2"
    options             : {}
    priority            : 26598
    severity            : []
    tier                : 1
    
    _uuid               : b7be6472-df67-439c-8c9c-f55929f0a6e0
    action              : drop
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="5", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a11452480169090787059))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:5"
    options             : {apply-after-lb="true"}
    priority            : 26595
    severity            : []
    tier                : 1
    
    _uuid               : 5a6e5bb4-36eb-4209-b8bc-c611983d4624
    action              : pass
    direction           : to-lport
    external_ids        : {direction=Ingress, gress-index="3", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "outport == @a14645450421485494999 && ((ip4.src == $a764182844364804195))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Ingress:3"
    options             : {}
    priority            : 26597
    severity            : []
    tier                : 1
    
    _uuid               : 04f20275-c410-405c-a923-0e677f767889
    action              : pass
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="4", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a5972452606168369118))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:4"
    options             : {apply-after-lb="true"}
    priority            : 26596
    severity            : []
    tier                : 1
    
    _uuid               : 4b5d836a-e0a3-4088-825e-f9f0ca58e538
    action              : drop
    direction           : to-lport
    external_ids        : {direction=Ingress, gress-index="4", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "outport == @a14645450421485494999 && ((ip4.src == $a13814616246365836720))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Ingress:4"
    options             : {}
    priority            : 26596
    severity            : []
    tier                : 1
    
    _uuid               : 5d09957d-d2cc-4f5a-9ddd-b97d9d772023
    action              : allow-related
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="2", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a18396736153283155648)) && tcp && tcp.dst=={8991,8992}"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:2"
    options             : {apply-after-lb="true"}
    priority            : 26598
    severity            : []
    tier                : 1
    
    _uuid               : 1a68a5ed-e7f9-47d0-b55c-89184d97e81a
    action              : allow-related
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a10706246167277696183)) && tcp && tcp.dst==6443"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:1"
    options             : {apply-after-lb="true"}
    priority            : 26599
    severity            : []
    tier                : 1
    
    _uuid               : aa1a224d-7960-4952-bdfb-35246bafbac8
    action              : allow-related
    direction           : to-lport
    external_ids        : {direction=Ingress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp}
    label               : 0
    log                 : false
    match               : "outport == @a14645450421485494999 && ((ip4.src == $a6786643370959569281)) && tcp && tcp.dst==7564"
    meter               : acl-logging
    name                : "ANP:cluster-control:Ingress:1"
    options             : {}
    priority            : 26599
    severity            : []
    tier                : 1
    
    _uuid               : 1a27d30e-3f96-4915-8ddd-ade7f22c117b
    action              : allow-related
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="3", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a10622494091691694581))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:3"
    options             : {apply-after-lb="true"}
    priority            : 26597
    severity            : []
    tier                : 1
    
    _uuid               : b23a087f-08f8-4225-8c27-4a9a9ee0c407
    action              : allow-related
    direction           : from-lport
    external_ids        : {direction=Egress, gress-index="0", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:udp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=udp}
    label               : 0
    log                 : false
    match               : "inport == @a14645450421485494999 && ((ip4.dst == $a13517855690389298082)) && udp && udp.dst==5353"
    meter               : acl-logging
    name                : "ANP:cluster-control:Egress:0"
    options             : {apply-after-lb="true"}
    priority            : 26600
    severity            : []
    tier                : 1
    
    _uuid               : d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd
    action              : allow-related
    direction           : to-lport
    external_ids        : {direction=Ingress, gress-index="0", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None}
    label               : 0
    log                 : false
    match               : "outport == @a14645450421485494999 && ((ip4.src == $a14545668191619617708))"
    meter               : acl-logging
    name                : "ANP:cluster-control:Ingress:0"
    options             : {}
    priority            : 26600
    severity            : []
    tier                : 1
    Note

    The ingress and egress outputs show you the logic of the policy as it is rendered in ACLs. Every time a packet matches the expression in the match field, the corresponding action is taken.

    1. Examine the specific ACL for the rule by running the following command:

      $ ovn-nbctl find ACL 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,direction=Ingress,"k8s.ovn.org/name"=cluster-control,gress-index="1"}'

      where:

      cluster-control
      Specifies the name of your ANP.
      Ingress
      Specifies the direction of traffic, either Ingress or Egress.
      1
      Specifies the index of the rule that you want to examine.

      For the example ANP named cluster-control at priority 34, the following is an example output for Ingress rule 1:

      Example 6.15. Example output

      _uuid               : aa1a224d-7960-4952-bdfb-35246bafbac8
      action              : allow-related
      direction           : to-lport
      external_ids        : {direction=Ingress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp}
      label               : 0
      log                 : false
      match               : "outport == @a14645450421485494999 && ((ip4.src == $a6786643370959569281)) && tcp && tcp.dst==7564"
      meter               : acl-logging
      name                : "ANP:cluster-control:Ingress:1"
      options             : {}
      priority            : 26599
      severity            : []
      tier                : 1
  4. Run the following command to look at address sets in the nbdb:

    $ ovn-nbctl find Address_Set 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}'

    Example 6.16. Example outputs for Address_Set

    _uuid               : 56e89601-5552-4238-9fc3-8833f5494869
    addresses           : ["192.168.194.135", "192.168.194.152", "192.168.194.193", "192.168.194.254"]
    external_ids        : {direction=Egress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a10706246167277696183
    
    _uuid               : 7df9330d-380b-4bdb-8acd-4eddeda2419c
    addresses           : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", "10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"]
    external_ids        : {direction=Ingress, gress-index="4", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a13814616246365836720
    
    _uuid               : 84d76f13-ad95-4c00-8329-a0b1d023c289
    addresses           : ["10.132.3.76", "10.135.0.44"]
    external_ids        : {direction=Egress, gress-index="4", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a5972452606168369118
    
    _uuid               : 0c53e917-f7ee-4256-8f3a-9522c0481e52
    addresses           : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", "10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"]
    external_ids        : {direction=Egress, gress-index="2", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a18396736153283155648
    
    _uuid               : 5228bf1b-dfd8-40ec-bfa8-95c5bf9aded9
    addresses           : []
    external_ids        : {direction=Ingress, gress-index="0", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a14545668191619617708
    
    _uuid               : 46530d69-70da-4558-8c63-884ec9dc4f25
    addresses           : ["10.132.2.10", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.133.0.47", "10.134.0.33", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.19", "10.135.0.24", "10.135.0.7", "10.135.0.8", "10.135.0.9"]
    external_ids        : {direction=Ingress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a6786643370959569281
    
    _uuid               : 65fdcdea-0b9f-4318-9884-1b51d231ad1d
    addresses           : ["10.132.3.72", "10.135.0.42"]
    external_ids        : {direction=Ingress, gress-index="2", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a13730899355151937870
    
    _uuid               : 73eabdb0-36bf-4ca3-b66d-156ac710df4c
    addresses           : ["10.0.32.0/19", "10.0.56.38/32", "10.0.69.0/24", "10.132.3.72", "10.135.0.42", "172.29.0.0/30", "192.168.194.103", "192.168.194.2"]
    external_ids        : {direction=Egress, gress-index="3", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a10622494091691694581
    
    _uuid               : 50cdbef2-71b5-474b-914c-6fcd1d7712d3
    addresses           : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", "10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"]
    external_ids        : {direction=Egress, gress-index="0", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a13517855690389298082
    
    _uuid               : 32a42f32-2d11-43dd-979d-a56d7ee6aa57
    addresses           : ["10.132.3.76", "10.135.0.44"]
    external_ids        : {direction=Ingress, gress-index="3", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a764182844364804195
    
    _uuid               : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72
    addresses           : ["0.0.0.0/0"]
    external_ids        : {direction=Egress, gress-index="5", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a11452480169090787059
    1. Examine the specific address set of the rule by running the following command:

      $ ovn-nbctl find Address_Set 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,direction=Egress,"k8s.ovn.org/name"=cluster-control,gress-index="5"}'

      Example 6.17. Example outputs for Address_Set

      _uuid               : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72
      addresses           : ["0.0.0.0/0"]
      external_ids        : {direction=Egress, gress-index="5", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
      name                : a11452480169090787059
  5. Run the following command to look at the port groups in the nbdb:

    $ ovn-nbctl find Port_Group 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}'

    Example 6.18. Example outputs for Port_Group

    _uuid               : f50acf71-7488-4b9a-b7b8-c8a024e99d21
    acls                : [04f20275-c410-405c-a923-0e677f767889, 0d5e4722-b608-4bb1-b625-23c323cc9926, 1a27d30e-3f96-4915-8ddd-ade7f22c117b, 1a68a5ed-e7f9-47d0-b55c-89184d97e81a, 4b5d836a-e0a3-4088-825e-f9f0ca58e538, 5a6e5bb4-36eb-4209-b8bc-c611983d4624, 5d09957d-d2cc-4f5a-9ddd-b97d9d772023, aa1a224d-7960-4952-bdfb-35246bafbac8, b23a087f-08f8-4225-8c27-4a9a9ee0c407, b7be6472-df67-439c-8c9c-f55929f0a6e0, d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd]
    external_ids        : {"k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy}
    name                : a14645450421485494999
    ports               : [5e75f289-8273-4f8a-8798-8c10f7318833, de7e1b71-6184-445d-93e7-b20acadf41ea]
6.2.5.2. Additional resources

6.3. Network policy

6.3.1. About network policy

As a developer, you can define network policies that restrict traffic to pods in your cluster.

6.3.1.1. About network policy

By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.

A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected.

Warning

Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules.

Network policies cannot block traffic from localhost or from their resident nodes.

The following example NetworkPolicy objects demonstrate supporting different scenarios:

  • Deny all traffic:

    To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-by-default
    spec:
      podSelector: {}
      ingress: []
  • Only allow connections from the OpenShift Container Platform Ingress Controller:

    To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-ingress
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
      podSelector: {}
      policyTypes:
      - Ingress
  • Only accept connections from pods within a project:

    Important

    To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy.

    To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-same-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector: {}
  • Only allow HTTP and HTTPS traffic based on pod labels:

    To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-http-and-https
    spec:
      podSelector:
        matchLabels:
          role: frontend
      ingress:
      - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
  • Accept connections by using both namespace and pod selectors:

    To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-pod-and-namespace-both
    spec:
      podSelector:
        matchLabels:
          name: test-pods
      ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                project: project_name
            podSelector:
              matchLabels:
                name: test-pods

NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.

For example, for the NetworkPolicy objects defined in the previous samples, you can define both the allow-same-namespace and allow-http-and-https policies within the same project. Pods with the label role=frontend then accept any connection allowed by either policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.

6.3.1.1.1. Using the allow-from-router network policy

Use the following NetworkPolicy to allow external traffic regardless of the router configuration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-router
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: "" 1
  podSelector: {}
  policyTypes:
  - Ingress
1
The policy-group.network.openshift.io/ingress: "" label supports the OVN-Kubernetes network plugin.
6.3.1.1.2. Using the allow-from-hostnetwork network policy

Add the following allow-from-hostnetwork NetworkPolicy object to allow ingress traffic from host network pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-hostnetwork
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ""
  podSelector: {}
  policyTypes:
  - Ingress
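
As noted in the earlier Important admonition, to allow ingress from hostNetwork pods in the same namespace you typically apply this policy together with the allow-same-namespace policy. A minimal sketch, assuming you saved the two objects in the hypothetical files allow-from-hostnetwork.yaml and allow-same-namespace.yaml:

$ oc apply -f allow-from-hostnetwork.yaml -f allow-same-namespace.yaml
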
6.3.1.2. Optimizations for network policy with OVN-Kubernetes network plugin

When designing your network policy, refer to the following guidelines:

  • For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules than to use multiple network policies with subsets of ingress or egress rules.
  • Every ingress or egress rule based on the podSelector or namespaceSelector spec generates a number of OVS flows proportional to the number of pods selected by the network policy plus the number of pods selected by the ingress or egress rule. Therefore, it is preferable to use a podSelector or namespaceSelector spec that selects as many pods as you need in one rule, instead of creating individual rules for every pod.

    For example, the following policy contains two rules:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend
      - from:
        - podSelector:
            matchLabels:
              role: backend

    The following policy expresses those same two rules as one:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector:
            matchExpressions:
            - {key: role, operator: In, values: [frontend, backend]}

    The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies select different pods but contain the same ingress rule:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: policy1
    spec:
      podSelector:
        matchLabels:
          role: db
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: policy2
    spec:
      podSelector:
        matchLabels:
          role: client
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend

    The following network policy expresses those same two rules as one:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: policy3
    spec:
      podSelector:
        matchExpressions:
        - {key: role, operator: In, values: [db, client]}
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend

    You can apply this optimization only when multiple selectors can be expressed as one. In cases where the selectors are based on different labels, it might not be possible to apply this optimization. In those cases, consider applying new labels specifically for network policy optimization, as shown in the following sketch.
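
    The following is a minimal sketch of that approach. It assumes a hypothetical label, network-policy-group: frontend-allowed, that you add to every pod that the single policy should select:

    $ oc label pod <pod_name> network-policy-group=frontend-allowed

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: policy-common-label
    spec:
      podSelector:
        matchLabels:
          network-policy-group: frontend-allowed # hypothetical label added only for policy optimization
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend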

6.3.1.3. Next steps
6.3.1.4. Additional resources

6.3.2. Creating a network policy

As a user with the admin role, you can create a network policy for a namespace.

6.3.2.1. Example NetworkPolicy object

The following annotates an example NetworkPolicy object:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107 1
spec:
  podSelector: 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: 3
        matchLabels:
          app: app
    ports: 4
    - protocol: TCP
      port: 27017
1
The name of the NetworkPolicy object.
2
A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3
A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4
A list of one or more destination ports on which to accept traffic.
6.3.2.2. Creating a network policy using the CLI

To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace that the network policy applies to.

Procedure

  1. Create a policy rule:

    1. Create a <policy_name>.yaml file:

      $ touch <policy_name>.yaml

      where:

      <policy_name>
      Specifies the network policy file name.
    2. Define a network policy in the file that you just created, such as in the following examples:

      Deny ingress from all pods in all namespaces

      This is a fundamental policy that blocks all cross-pod networking other than the cross-pod traffic allowed by the configuration of other network policies.

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: deny-by-default
      spec:
        podSelector: {}
        policyTypes:
        - Ingress
        ingress: []

      Allow ingress from all pods in the same namespace

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}

      Allow ingress traffic to one pod from a particular namespace

      This policy allows traffic to pods with the label pod: pod-a from pods running in the namespace-y namespace.

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-traffic-pod
      spec:
        podSelector:
          matchLabels:
            pod: pod-a
        policyTypes:
        - Ingress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                 kubernetes.io/metadata.name: namespace-y
  2. To create the network policy object, enter the following command:

    $ oc apply -f <policy_name>.yaml -n <namespace>

    where:

    <policy_name>
    Specifies the network policy file name.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    networkpolicy.networking.k8s.io/deny-by-default created
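
    Optionally, confirm that the object exists by listing the network policies in the namespace (a quick check that is not part of the procedure above):

    $ oc get networkpolicy -n <namespace>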

Note

If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

6.3.2.3. Creating a default deny all network policy

This is a fundamental policy that blocks all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a deny-by-default policy.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace that the network policy applies to.

Procedure

  1. Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-by-default
      namespace: default 1
    spec:
      podSelector: {} 2
      ingress: [] 3
    1
    namespace: default deploys this policy to the default namespace.
    2
    The podSelector is empty, which means it matches all pods. Therefore, the policy applies to all pods in the default namespace.
    3
    No ingress rules are specified, so all incoming traffic to all pods is dropped.
  2. Apply the policy by entering the following command:

    $ oc apply -f deny-by-default.yaml

    Example output

    networkpolicy.networking.k8s.io/deny-by-default created
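
    Optionally, inspect the new policy to confirm that it selects all pods and defines no ingress rules (a quick check that is not part of the procedure above):

    $ oc describe networkpolicy deny-by-default -n default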

6.3.2.4. Creating a network policy to allow traffic from external clients

With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web.

Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows external services to access the pod from the public Internet directly or by using a load balancer. Traffic is allowed only to pods with the label app=web.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace that the network policy applies to.

Procedure

  1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: web-allow-external
      namespace: default
    spec:
      policyTypes:
      - Ingress
      podSelector:
        matchLabels:
          app: web
      ingress:
        - {}
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-external.yaml

    Example output

    networkpolicy.networking.k8s.io/web-allow-external created

    This policy allows traffic from all resources, including external traffic, as illustrated in the following diagram:

Allow traffic from external clients
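
To exercise this policy from outside the cluster, you can expose the web service through a route and send a request to it. The following is a minimal sketch; it assumes that a web pod and service labeled app=web already exist in the default namespace (for example, created with the oc run command shown in the verification steps of the next section):

$ oc expose service/web -n default
$ curl --max-time 2 http://<route_hostname>

Replace <route_hostname> with the host name reported by oc get route web -n default.
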
6.3.2.5. Creating a network policy allowing traffic to an application from all namespaces
Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace that the network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: web-allow-all-namespaces
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector: {} 2
    1
    Applies the policy only to app:web pods in the default namespace.
    2
    Selects all pods in all namespaces.
    Note

    By default, if you omit the namespaceSelector, no namespaces are selected, which means the policy allows traffic only from the namespace that the network policy is deployed to.

  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-all-namespaces.yaml

    Example output

    networkpolicy.networking.k8s.io/web-allow-all-namespaces created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to deploy an alpine image in the secondary namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
  3. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

6.3.2.6. Creating a network policy allowing traffic to an application from a namespace
Note

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:

  • Restrict traffic to a production database only to namespaces where production workloads are deployed.
  • Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace that the network policy applies to.

Procedure

  1. Create a policy that allows traffic from all pods in namespaces with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: web-allow-prod
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: web 1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              purpose: production 2
    1
    Applies the policy only to app:web pods in the default namespace.
    2
    Restricts traffic to only pods in namespaces that have the label purpose=production.
  2. Apply the policy by entering the following command:

    $ oc apply -f web-allow-prod.yaml

    Example output

    networkpolicy.networking.k8s.io/web-allow-prod created

Verification

  1. Start a web service in the default namespace by entering the following command:

    $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
  2. Run the following command to create the prod namespace:

    $ oc create namespace prod
  3. Run the following command to label the prod namespace:

    $ oc label namespace/prod purpose=production
  4. Run the following command to create the dev namespace:

    $ oc create namespace dev
  5. Run the following command to label the dev namespace:

    $ oc label namespace/dev purpose=testing
  6. Run the following command to deploy an alpine image in the dev namespace and to start a shell:

    $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
  7. Run the following command in the shell and observe that the request is blocked:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    wget: download timed out

  8. Run the following command to deploy an alpine image in the prod namespace and start a shell:

    $ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
  9. Run the following command in the shell and observe that the request is allowed:

    # wget -qO- --timeout=2 http://web.default

    Expected output

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

6.3.2.7. Additional resources

6.3.3. Viewing a network policy

As a user with the admin role, you can view a network policy for a namespace.

6.3.3.1. Example NetworkPolicy object

The following annotates an example NetworkPolicy object:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107 1
spec:
  podSelector: 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: 3
        matchLabels:
          app: app
    ports: 4
    - protocol: TCP
      port: 27017
1
The name of the NetworkPolicy object.
2
A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3
A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4
A list of one or more destination ports on which to accept traffic.
6.3.3.2. Viewing network policies using the CLI

You can examine the network policies in a namespace.

Note

If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace where the network policy exists.

Procedure

  • List network policies in a namespace:

    • To view network policy objects defined in a namespace, enter the following command:

      $ oc get networkpolicy
    • Optional: To examine a specific network policy, enter the following command:

      $ oc describe networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the network policy to inspect.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

      For example:

      $ oc describe networkpolicy allow-same-namespace

      Output for oc describe command

      Name:         allow-same-namespace
      Namespace:    ns1
      Created on:   2021-05-24 22:28:56 -0400 EDT
      Labels:       <none>
      Annotations:  <none>
      Spec:
        PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
        Allowing ingress traffic:
          To Port: <any> (traffic allowed to all ports)
          From:
            PodSelector: <none>
        Not affecting egress traffic
        Policy Types: Ingress
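
      To view the complete definition in YAML format, you can also run the following command (an alternative view, not part of the procedure above):

      $ oc get networkpolicy <policy_name> -n <namespace> -o yaml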

Note

If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

6.3.4. Editing a network policy

As a user with the admin role, you can edit an existing network policy for a namespace.

6.3.4.1. Editing a network policy

You can edit a network policy in a namespace.

Note

If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace where the network policy exists.

Procedure

  1. Optional: To list the network policy objects in a namespace, enter the following command:

    $ oc get networkpolicy -n <namespace>

    where:

    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  2. Edit the network policy object.

    • If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command.

      $ oc apply -n <namespace> -f <policy_file>.yaml

      where:

      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
      <policy_file>
      Specifies the name of the file containing the network policy.
    • If you need to update the network policy object directly, enter the following command:

      $ oc edit networkpolicy <policy_name> -n <namespace>

      where:

      <policy_name>
      Specifies the name of the network policy.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
  3. Confirm that the network policy object is updated.

    $ oc describe networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Note

If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

6.3.4.2. Example NetworkPolicy object

The following annotates an example NetworkPolicy object:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107 1
spec:
  podSelector: 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: 3
        matchLabels:
          app: app
    ports: 4
    - protocol: TCP
      port: 27017
1
The name of the NetworkPolicy object.
2
A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3
A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4
A list of one or more destination ports on which to accept traffic.
6.3.4.3. Additional resources

6.3.5. Deleting a network policy

As a user with the admin role, you can delete a network policy from a namespace.

6.3.5.1. Deleting a network policy using the CLI

You can delete a network policy in a namespace.

Note

If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.
  • You are working in the namespace where the network policy exists.

Procedure

  • To delete a network policy object, enter the following command:

    $ oc delete networkpolicy <policy_name> -n <namespace>

    where:

    <policy_name>
    Specifies the name of the network policy.
    <namespace>
    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output

    networkpolicy.networking.k8s.io/default-deny deleted

Note

If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

6.3.6. Defining a default network policy for projects

As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one.

6.3.6.1. Modifying the template for new projects

As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.

To create your own custom project template:

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Log in as a user with cluster-admin privileges.
  2. Generate the default project template:

    $ oc adm create-bootstrap-project-template -o yaml > template.yaml
  3. Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
  4. The project template must be created in the openshift-config namespace. Load your modified template:

    $ oc create -f template.yaml -n openshift-config
  5. Edit the project configuration resource using the web console or CLI.

    • Using the web console:

      1. Navigate to the Administration → Cluster Settings page.
      2. Click Configuration to view all configuration resources.
      3. Find the entry for Project and click Edit YAML.
    • Using the CLI:

      1. Edit the project.config.openshift.io/cluster resource:

        $ oc edit project.config.openshift.io/cluster
  6. Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.

    Project configuration resource with custom project template

    apiVersion: config.openshift.io/v1
    kind: Project
    metadata:
    # ...
    spec:
      projectRequestTemplate:
        name: <template_name>
    # ...

  7. After you save your changes, create a new project to verify that your changes were successfully applied.
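
    For example, a quick sketch that uses the hypothetical project name template-verify:

    $ oc new-project template-verify
    $ oc get <resource_type> -n template-verify

    Replace <resource_type> with the type of object that you added to the template, such as networkpolicy or limitrange.
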
6.3.6.2. Adding network policies to the new project template

As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project.

Prerequisites

  • Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin.
  • You installed the OpenShift CLI (oc).
  • You must log in to the cluster with a user with cluster-admin privileges.
  • You must have created a custom default project template for new projects.

Procedure

  1. Edit the default template for a new project by running the following command:

    $ oc edit template <project_template> -n openshift-config

    Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.

  2. In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.

    In the following example, the objects parameter collection includes several NetworkPolicy objects.

    objects:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: ingress
        podSelector: {}
        policyTypes:
        - Ingress
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-kube-apiserver-operator
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: openshift-kube-apiserver-operator
            podSelector:
              matchLabels:
                app: kube-apiserver-operator
        policyTypes:
        - Ingress
    ...
  3. Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:

    1. Create a new project:

      $ oc new-project <project> 1
      1
      Replace <project> with the name for the project you are creating.
    2. Confirm that the network policy objects in the new project template exist in the new project:

      $ oc get networkpolicy
      NAME                           POD-SELECTOR   AGE
      allow-from-openshift-ingress   <none>         7s
      allow-from-same-namespace      <none>         7s

6.3.7. Configuring multitenant isolation with network policy

As a cluster administrator, you can configure your network policies to provide multitenant network isolation.

Note

Configuring network policies as described in this section provides network isolation similar to the multitenant mode of OpenShift SDN in previous versions of OpenShift Container Platform.

6.3.7.1. Configuring multitenant isolation by using network policy

You can configure your project to isolate it from pods and services in other project namespaces.

Prerequisites

  • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
  • You installed the OpenShift CLI (oc).
  • You are logged in to the cluster with a user with admin privileges.

Procedure

  1. Create the following NetworkPolicy objects:

    1. A policy named allow-from-openshift-ingress.

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                policy-group.network.openshift.io/ingress: ""
        podSelector: {}
        policyTypes:
        - Ingress
      EOF
      Note

      policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OVN-Kubernetes.

    2. A policy named allow-from-openshift-monitoring:

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-monitoring
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: monitoring
        podSelector: {}
        policyTypes:
        - Ingress
      EOF
    3. A policy named allow-same-namespace:

      $ cat << EOF| oc create -f -
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
      EOF
    4. A policy named allow-from-kube-apiserver-operator:

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-kube-apiserver-operator
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: openshift-kube-apiserver-operator
            podSelector:
              matchLabels:
                app: kube-apiserver-operator
        policyTypes:
        - Ingress
      EOF

      For more details, see New kube-apiserver-operator webhook controller validating health of webhook.

  2. Optional: To confirm that the network policies exist in your current project, enter the following command:

    $ oc describe networkpolicy

    Example output

    Name:         allow-from-openshift-ingress
    Namespace:    example1
    Created on:   2020-06-09 00:28:17 -0400 EDT
    Labels:       <none>
    Annotations:  <none>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          NamespaceSelector: network.openshift.io/policy-group: ingress
      Not affecting egress traffic
      Policy Types: Ingress
    
    
    Name:         allow-from-openshift-monitoring
    Namespace:    example1
    Created on:   2020-06-09 00:29:57 -0400 EDT
    Labels:       <none>
    Annotations:  <none>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          NamespaceSelector: network.openshift.io/policy-group: monitoring
      Not affecting egress traffic
      Policy Types: Ingress

6.3.7.2. Next steps

6.4. Audit logging for network security

The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) access control lists (ACLs) to manage AdminNetworkPolicy, BaselineAdminNetworkPolicy, NetworkPolicy, and EgressFirewall objects. Audit logging exposes allow and deny ACL events for the NetworkPolicy, EgressFirewall, and BaselineAdminNetworkPolicy custom resources (CRs). Logging also exposes allow, deny, and pass ACL events for the AdminNetworkPolicy (ANP) CR.

Note

Audit logging is available for only the OVN-Kubernetes network plugin.

6.4.1. Audit configuration

The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging:

Audit logging configuration

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0

The following table describes the configuration fields for audit logging.

Table 6.1. policyAuditConfig object
Field | Type | Description

rateLimit

integer

The maximum number of messages to generate every second per node. The default value is 20 messages per second.

maxFileSize

integer

The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB.

maxLogFiles

integer

The maximum number of log files that are retained.

destination

string

One of the following additional audit log targets:

libc
The libc syslog() function of the journald process on the host.
udp:<host>:<port>
A syslog server. Replace <host>:<port> with the host and port of the syslog server.
unix:<file>
A Unix Domain Socket file specified by <file>.
null
Do not send the audit logs to any additional target.

syslogFacility

string

The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
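
For example, the following sketch sends audit logs to a remote syslog server in addition to the local log file. The host name and port are illustrative:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "udp:syslog.example.com:514"
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0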

6.4.2. Audit logging

You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster.

You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow, deny, or both values to enable audit logging for a namespace.

Note

A network policy does not support setting the Pass action as a rule.

The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues.

Example namespace annotation

kind: Namespace
apiVersion: v1
metadata:
  name: example1
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "info",
        "allow": "info"
      }

To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file.
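
To inspect the values that are currently applied to the cluster, you can query the Network Operator configuration directly. The following command is a sketch that prints the policyAuditConfig object shown earlier:

$ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.policyAuditConfig}'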

The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0. The following example shows the key parameters and their values as they appear in a log message:

Example logging message that outputs parameters and their values

<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow>

Where:

  • <timestamp> states the time and date for the creation of a log message.
  • <message_serial> lists the serial number for a log message.
  • acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin.
  • <severity> sets the severity level for a log message. If you enable audit logging for both allow and deny actions, two severity levels can appear in the log message output.
  • <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database (nbdb) that was created by the network policy.
  • <verdict> can be either allow or drop.
  • <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod.
  • <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields.

The following example shows OVS fields that the flow parameter uses to extract packet information from system memory:

Example of OVS fields used by the flow parameter to extract packet information

<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>

Where:

  • <proto> states the protocol. Valid values are tcp and udp.
  • vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic.
  • <src_mac> specifies the source Media Access Control (MAC) address.
  • <source_mac> specifies the destination MAC address.
  • <source_ip> lists the source IP address.
  • <target_ip> lists the target IP address.
  • <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic.
  • <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network.
  • <ip_ttl> states the Time To Live (TTL) information for a packet.
  • <fragment> specifies what type of IP fragments or IP non-fragments to match.
  • <tcp_src_port> shows the source port for TCP and UDP protocols.
  • <tcp_dst_port> lists the destination port for TCP and UDP protocols.
  • <tcp_flags> supports numerous flags, such as SYN, ACK, and PSH. If you need to set multiple values, separate each value with a vertical bar (|). The UDP protocol does not support this parameter.
Note

For more information about the previous field descriptions, go to the OVS manual page for ovs-fields.

Example ACL deny log entry for a network policy

2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn

The following table describes namespace annotation values:

Table 6.2. Audit logging namespace annotation for k8s.ovn.org/acl-logging
Field | Description

deny

Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert, warning, notice, info, or debug values.

allow

Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert, warning, notice, info, or debug values.

pass

A pass action applies to an admin network policy’s ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action.

Additional resources

6.4.3. AdminNetworkPolicy audit logging

Audit logging is enabled per AdminNetworkPolicy CR by annotating an ANP policy with the k8s.ovn.org/acl-logging key, as in the following example:

Example 6.19. Example of annotation for AdminNetworkPolicy CR

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert", "pass" : "warning" }'
  name: anp-tenant-log
spec:
  priority: 5
  subject:
    namespaces:
      matchLabels:
        tenant: backend-storage # Selects all pods owned by storage tenant.
  ingress:
    - name: "allow-all-ingress-product-development-and-customer" # Product development and customer tenant ingress to backend storage.
      action: "Allow"
      from:
      - pods:
          namespaceSelector:
            matchExpressions:
            - key: tenant
              operator: In
              values:
              - product-development
              - customer
          podSelector: {}
    - name: "pass-all-ingress-product-security"
      action: "Pass"
      from:
      - namespaces:
          matchLabels:
              tenant: product-security
    - name: "deny-all-ingress" # Ingress to backend from all other pods in the cluster.
      action: "Deny"
      from:
      - namespaces: {}
  egress:
    - name: "allow-all-egress-product-development"
      action: "Allow"
      to:
      - pods:
          namespaceSelector:
            matchLabels:
              tenant: product-development
          podSelector: {}
    - name: "pass-egress-product-security"
      action: "Pass"
      to:
      - namespaces:
           matchLabels:
             tenant: product-security
    - name: "deny-all-egress" # Egress from backend denied to all other pods.
      action: "Deny"
      to:
      - namespaces: {}

Logs are generated whenever a specific OVN ACL is hit and meets the action criteria set in your logging annotation. For example, if a pod in any namespace with the label tenant: product-development accesses a pod in a namespace with the label tenant: backend-storage, a log is generated.
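
You can also add or update the annotation on an existing ANP from the CLI. The following command is a sketch that reuses the anp-tenant-log name from the preceding example and assumes that the adminnetworkpolicy resource name resolves on your cluster:

$ oc annotate adminnetworkpolicy anp-tenant-log \
    k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert", "pass": "warning" }' --overwrite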

Note

ACL logging is limited to 60 characters. If your ANP name field is long, the rest of the log will be truncated.

The following is a direction index for the example log entries that follow:

Direction | Rule

Ingress

Rule0
Allow from tenant product-development and customer to tenant backend-storage; Ingress0: Allow
Rule1
Pass from tenant product-security to tenant backend-storage; Ingress1: Pass
Rule2
Deny ingress from all pods; Ingress2: Deny

Egress

Rule0
Allow to product-development; Egress0: Allow
Rule1
Pass to product-security; Egress1: Pass
Rule2
Deny egress to all other pods; Egress2: Deny

Example 6.20. Example ACL log entry for Allow action of the AdminNetworkPolicy named anp-tenant-log with Ingress:0 and Egress:0

2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn
2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack
2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:0", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack

Example 6.21. Example ACL log entry for Pass action of the AdminNetworkPolicy named anp-tenant-log with Ingress:1 and Egress:1

2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:1", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack
2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:1", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack

Example 6.22. Example ACL log entry for Deny action of the AdminNetworkPolicy named anp-tenant-log with Egress:2 and Ingress:2

2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:2", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn
2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:2", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn

The following table describes the ANP annotation:

Table 6.3. Audit logging AdminNetworkPolicy annotation
Annotation | Value

k8s.ovn.org/acl-logging

You must specify at least one of Allow, Deny, or Pass to enable audit logging for a namespace.

Deny
Optional: Specify alert, warning, notice, info, or debug.
Allow
Optional: Specify alert, warning, notice, info, or debug.
Pass
Optional: Specify alert, warning, notice, info, or debug.

6.4.4. BaselineAdminNetworkPolicy audit logging

Audit logging is enabled in the BaselineAdminNetworkPolicy CR by annotating a BANP policy with the k8s.ovn.org/acl-logging key, as in the following example:

Example 6.23. Example of annotation for BaselineAdminNetworkPolicy CR

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert"}'
  name: default
spec:
  subject:
    namespaces:
      matchLabels:
          tenant: workloads # Selects all workload pods in the cluster.
  ingress:
  - name: "default-allow-dns" # This rule allows ingress from dns tenant to all workloads.
    action: "Allow"
    from:
    - namespaces:
          matchLabels:
            tenant: dns
  - name: "default-deny-dns" # This rule denies all ingress from all pods to workloads.
    action: "Deny"
    from:
    - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.
  egress:
  - name: "default-deny-dns" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches.
    action: "Deny"
    to:
    - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.

In this example, if a pod in a namespace with the label tenant: dns accesses a pod in a namespace with the label tenant: workloads, a log is generated.
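
As with the AdminNetworkPolicy CR, you can apply the annotation to the default BANP from the CLI. The following command is a sketch that assumes the baselineadminnetworkpolicy resource name resolves on your cluster:

$ oc annotate baselineadminnetworkpolicy default \
    k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }' --overwrite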

The following is a direction index for the example log entries that follow:

Direction | Rule

Ingress

Rule0
Allow from tenant dns to tenant workloads; Ingress0: Allow
Rule1
Deny to tenant workloads from all pods; Ingress1: Deny

Egress

Rule0
Deny to all pods; Egress0: Deny

Example 6.24. Example ACL allow log entry for Allow action of default BANP with Ingress:0

2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn
2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack
2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack
2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack
2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack
2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack

Example 6.25. Example ACL deny log entry for Deny action of default BANP with Egress:0 and Ingress:1

2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn
2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn
2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn
2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn
2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn
2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn

The following table describes the BANP annotation:

Table 6.4. Audit logging BaselineAdminNetworkPolicy annotation
Annotation | Value

k8s.ovn.org/acl-logging

You must specify at least one of Allow or Deny to enable audit logging for a namespace.

Deny
Optional: Specify alert, warning, notice, info, or debug.
Allow
Optional: Specify alert, warning, notice, info, or debug.

6.4.5. Configuring egress firewall and network policy auditing for a cluster

As a cluster administrator, you can customize audit logging for your cluster.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  • To customize the audit logging configuration, enter the following command:

    $ oc edit network.operator.openshift.io/cluster
    Tip

    You can alternatively customize and apply the following YAML to configure audit logging:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        ovnKubernetesConfig:
          policyAuditConfig:
            destination: "null"
            maxFileSize: 50
            rateLimit: 20
            syslogFacility: local0

Verification

  1. To create a namespace with network policies, complete the following steps:

    1. Create a namespace for verification:

      $ cat <<EOF| oc create -f -
      kind: Namespace
      apiVersion: v1
      metadata:
        name: verify-audit-logging
        annotations:
          k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }'
      EOF

      Example output

      namespace/verify-audit-logging created

    2. Create network policies for the namespace:

      $ cat <<EOF| oc create -n verify-audit-logging -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: deny-all
      spec:
        podSelector:
          matchLabels:
        policyTypes:
        - Ingress
        - Egress
      ---
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-same-namespace
        namespace: verify-audit-logging
      spec:
        podSelector: {}
        policyTypes:
         - Ingress
         - Egress
        ingress:
          - from:
              - podSelector: {}
        egress:
          - to:
             - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: verify-audit-logging
      EOF

      Example output

      networkpolicy.networking.k8s.io/deny-all created
      networkpolicy.networking.k8s.io/allow-from-same-namespace created

  2. Create a pod for source traffic in the default namespace:

    $ cat <<EOF| oc create -n default -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: client
    spec:
      containers:
        - name: client
          image: registry.access.redhat.com/rhel7/rhel-tools
          command: ["/bin/sh", "-c"]
          args:
            ["sleep inf"]
    EOF
  3. Create two pods in the verify-audit-logging namespace:

    $ for name in client server; do
    cat <<EOF| oc create -n verify-audit-logging -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: ${name}
    spec:
      containers:
        - name: ${name}
          image: registry.access.redhat.com/rhel7/rhel-tools
          command: ["/bin/sh", "-c"]
          args:
            ["sleep inf"]
    EOF
    done

    Example output

    pod/client created
    pod/server created

  4. To generate traffic and produce network policy audit log entries, complete the following steps:

    1. Obtain the IP address for pod named server in the verify-audit-logging namespace:

      $ POD_IP=$(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')
    2. Ping the IP address from the previous command from the pod named client in the default namespace and confirm that all packets are dropped:

      $ oc exec -it client -n default -- /bin/ping -c 2 $POD_IP

      Example output

      PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data.
      
      --- 10.128.2.55 ping statistics ---
      2 packets transmitted, 0 received, 100% packet loss, time 2041ms

    3. Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed:

      $ oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 $POD_IP

      Example output

      PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data.
      64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms
      64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms
      
      --- 10.128.0.86 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1001ms
      rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms

  5. Display the latest entries in the network policy audit log:

    $ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
        oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
      done

    Example output

    2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
    2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
    2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
    2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0

6.4.6. Enabling egress firewall and network policy audit logging for a namespace

As a cluster administrator, you can enable audit logging for a namespace.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  • To enable audit logging for a namespace, enter the following command:

    $ oc annotate namespace <namespace> \
      k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }'

    where:

    <namespace>
    Specifies the name of the namespace.
    Tip

    You can alternatively apply the following YAML to enable audit logging:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: <namespace>
      annotations:
        k8s.ovn.org/acl-logging: |-
          {
            "deny": "alert",
            "allow": "notice"
          }

    Example output

    namespace/verify-audit-logging annotated

Verification

  • Display the latest entries in the audit log:

    $ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
        oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
      done

    Example output

    2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
    2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0

6.4.7. Disabling egress firewall and network policy audit logging for a namespace

As a cluster administrator, you can disable audit logging for a namespace.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  • To disable audit logging for a namespace, enter the following command:

    $ oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-

    where:

    <namespace>
    Specifies the name of the namespace.
    Tip

    You can alternatively apply the following YAML to disable audit logging:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: <namespace>
      annotations:
        k8s.ovn.org/acl-logging: null

    Example output

    namespace/verify-audit-logging annotated

6.4.8. Additional resources

6.5. Ingress Node Firewall Operator in OpenShift Container Platform

The Ingress Node Firewall Operator allows administrators to manage firewall configurations at the node level.

6.5.1. Ingress Node Firewall Operator

The Ingress Node Firewall Operator provides ingress firewall rules at the node level by deploying a daemon set to the nodes that you specify and manage in the firewall configurations. To deploy the daemon set, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create an ingress node firewall daemon set, which runs on all nodes that match the nodeSelector.

You configure rules in the IngressNodeFirewall CR and apply them to nodes by using the nodeSelector and setting the label values to "true".
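
For example, the following command is a sketch that labels a node so that an IngressNodeFirewall object whose nodeSelector matches that label applies to it. The label name do-node-ingress-firewall is illustrative:

$ oc label node <node_name> do-node-ingress-firewall=true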

Important

The Ingress Node Firewall Operator supports only stateless firewall rules.

Network interface controllers (NICs) that do not support native XDP drivers run at lower performance.

For OpenShift Container Platform 4.14 or later, you must run Ingress Node Firewall Operator on RHEL 9.0 or later.

6.5.2. Installing the Ingress Node Firewall Operator

As a cluster administrator, you can install the Ingress Node Firewall Operator by using the OpenShift Container Platform CLI or the web console.

6.5.2.1. Installing the Ingress Node Firewall Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. To create the openshift-ingress-node-firewall namespace, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: openshift-ingress-node-firewall
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ingress-node-firewall-operators
      namespace: openshift-ingress-node-firewall
    EOF
  3. Subscribe to the Ingress Node Firewall Operator.

    1. To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command:

      $ cat << EOF| oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ingress-node-firewall-sub
        namespace: openshift-ingress-node-firewall
      spec:
        name: ingress-node-firewall
        channel: stable
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get ip -n openshift-ingress-node-firewall

    Example output

    NAME            CSV                                         APPROVAL    APPROVED
    install-5cvnz   ingress-node-firewall.4.17.0-202211122336   Automatic   true

  5. To verify the version of the Operator, enter the following command:

    $ oc get csv -n openshift-ingress-node-firewall

    Example output

    NAME                                        DISPLAY                          VERSION               REPLACES                                    PHASE
    ingress-node-firewall.4.17.0-202211122336   Ingress Node Firewall Operator   4.17.0-202211122336   ingress-node-firewall.4.17.0-202211102047   Succeeded

6.5.2.2. Installing the Ingress Node Firewall Operator using the web console

As a cluster administrator, you can install the Operator using the web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. Install the Ingress Node Firewall Operator:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Select Ingress Node Firewall Operator from the list of available Operators, and then click Install.
    3. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
    4. Click Install.
  2. Verify that the Ingress Node Firewall Operator is installed successfully:

    1. Navigate to the Operators → Installed Operators page.
    2. Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
      • Navigate to the Workloads → Pods page and check the logs for pods in the openshift-ingress-node-firewall project.
      • Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command:

        $ oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management
        Note

        For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation.

6.5.3. Deploying Ingress Node Firewall Operator

Prerequisite

  • The Ingress Node Firewall Operator is installed.

Procedure

To deploy the Ingress Node Firewall Operator, create an IngressNodeFirewallConfig custom resource that deploys the Operator’s daemon set. You can then deploy one or more IngressNodeFirewall CRs to apply firewall rules to nodes.

  1. Create an IngressNodeFirewallConfig CR named ingressnodefirewallconfig in the openshift-ingress-node-firewall namespace.
  2. Run the following command to deploy Ingress Node Firewall Operator rules:

    $ oc apply -f rule.yaml
6.5.3.1. Ingress Node Firewall configuration object

The fields for the Ingress Node Firewall configuration object are described in the following table:

Table 6.5. Ingress Node Firewall Configuration object
Field | Type | Description

metadata.name

string

The name of the CR object. The name of the configuration object must be ingressnodefirewallconfig.

metadata.namespace

string

Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace.

spec.nodeSelector

string

A node selection constraint used to target nodes through specified node labels. For example:

spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
Note

One label used in nodeSelector must match a label on the nodes in order for the daemon set to start. For example, if the node labels node-role.kubernetes.io/worker and node-type.kubernetes.io/vm are applied to a node, then at least one label must be set using nodeSelector for the daemon set to start.

spec.ebpfProgramManagerMode

boolean

Specifies whether the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. This capability is a Technology Preview feature.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Note

The Operator consumes the CR and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector.

Ingress Node Firewall Operator example configuration

A complete Ingress Node Firewall Configuration is specified in the following example:

Example Ingress Node Firewall Configuration object

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""

6.5.3.2. Ingress Node Firewall rules object

The fields for the Ingress Node Firewall rules object are described in the following table:

Table 6.6. Ingress Node Firewall rules object
Field | Type | Description

metadata.name

string

The name of the CR object.

interfaces

array

The fields for this object specify the interfaces to apply the firewall rules to. For example, - en0 and - en1.

nodeSelector

array

You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeSelector labels to true to apply the rule.

ingress

object

ingress allows you to configure the rules that allow outside access to the services on your cluster.

Ingress object configuration

The values for the ingress object are defined in the following table:

Table 6.7. ingress object
Field | Type | Description

sourceCIDRs

array

Allows you to set the CIDR block. You can configure multiple CIDRs from different address families.

Note

Different CIDRs allow you to use the same order rule. If there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field specifies which rule is applied first. Rules are applied in ascending order.

rules

array

Ingress firewall rules. The rules.order values are ordered starting at 1 for each sourceCIDRs entry, with up to 100 rules per CIDR. Rules with lower order values are executed first.

rules.protocolConfig.protocol supports the following protocols: TCP, UDP, SCTP, ICMP and ICMPv6. ICMP and ICMPv6 rules can match against ICMP and ICMPv6 types or codes. TCP, UDP, and SCTP rules can match against a single destination port or a range of ports using <start : end-1> format.

Set rules.action to allow to apply the rule or deny to disallow the rule.

Note

Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server.

Ingress Node Firewall rules object example

A complete Ingress Node Firewall configuration is specified in the following example:

Example Ingress Node Firewall configuration

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
  name: ingressnodefirewall
spec:
  interfaces:
  - eth0
  nodeSelector:
    matchLabels:
      <ingress_firewall_label_name>: <label_value> 1
  ingress:
  - sourceCIDRs:
       - 172.16.0.0/12
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMP
        icmp:
          icmpType: 8 #ICMP Echo request
      action: Deny
    - order: 20
      protocolConfig:
        protocol: TCP
        tcp:
          ports: "8000-9000"
      action: Deny
  - sourceCIDRs:
       - fc00:f853:ccd:e793::0/64
    rules:
    - order: 10
      protocolConfig:
        protocol: ICMPv6
        icmpv6:
          icmpType: 128 #ICMPV6 Echo request
      action: Deny

1
A <label_name> and a <label_value> must exist on the node and must match the nodeselector label and value applied to the nodes you want the ingressfirewallconfig CR to run on. The <label_value> can be true or false. By using nodeSelector labels, you can target separate groups of nodes to apply different rules to using the ingressfirewallconfig CR.
Zero trust Ingress Node Firewall rules object example

Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH.

A complete configuration of a zero trust Ingress Node Firewall rule set is specified in the following example:

Important

To ensure proper functionality in the following case, you must add every port that your application uses to the allowlist.

Example zero trust Ingress Node Firewall rules

apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewall
metadata:
 name: ingressnodefirewall-zero-trust
spec:
 interfaces:
 - eth1 1
 nodeSelector:
   matchLabels:
     <ingress_firewall_label_name>: <label_value> 2
 ingress:
 - sourceCIDRs:
      - 0.0.0.0/0 3
   rules:
   - order: 10
     protocolConfig:
       protocol: TCP
       tcp:
         ports: 22
     action: Allow
   - order: 20
     action: Deny 4

1
The network interface on the cluster node to which the rules apply.
2
The <label_name> and <label_value> must match the nodeSelector label and value applied to the specific nodes to which you want to apply the ingressfirewallconfig CR.
3
0.0.0.0/0 set to match any CIDR
4
action set to Deny
Important

eBPF Manager Operator integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.5.4. Ingress Node Firewall Operator integration

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.

When this integration is enabled, the following limitations apply:

  • The Ingress Node Firewall Operator uses TCX if XDP is not available and TCX is incompatible with bpfman.
  • The Ingress Node Firewall Operator daemon set pods remain in the ContainerCreating state until the firewall rules are applied.
  • The Ingress Node Firewall Operator daemon set pods run as privileged.

6.5.5. Configuring Ingress Node Firewall Operator to use the eBPF Manager Operator

The Ingress Node Firewall uses eBPF programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.

As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.
  • You installed the Ingress Node Firewall Operator.
  • You have installed the eBPF Manager Operator.

Procedure

  1. Apply the following labels to the openshift-ingress-node-firewall namespace:

    $ oc label namespace openshift-ingress-node-firewall \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/warn=privileged --overwrite
  2. Edit the IngressNodeFirewallConfig object named ingressnodefirewallconfig and set the ebpfProgramManagerMode field:

    Ingress Node Firewall Operator configuration object

    apiVersion: ingressnodefirewall.openshift.io/v1alpha1
    kind: IngressNodeFirewallConfig
    metadata:
      name: ingressnodefirewallconfig
      namespace: openshift-ingress-node-firewall
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      ebpfProgramManagerMode: <ebpf_mode>

    where:

    <ebpf_mode>: Specifies whether or not the Ingress Node Firewall Operator uses the eBPF Manager Operator to manage eBPF programs. Must be either true or false. If unset, eBPF Manager is not used.
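
    Alternatively, you can set the field without opening an editor. The following oc patch command is a sketch that sets ebpfProgramManagerMode to true, assuming that the ingressnodefirewallconfig resource name resolves on your cluster:

    $ oc patch ingressnodefirewallconfig ingressnodefirewallconfig \
        -n openshift-ingress-node-firewall --type=merge \
        -p '{"spec":{"ebpfProgramManagerMode":true}}'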

6.5.6. Viewing Ingress Node Firewall Operator rules

Procedure

  1. Run the following command to view all current rules:

    $ oc get ingressnodefirewall
  2. Choose one of the returned <resource> names and run the following command to view the rules or configs:

    $ oc get <resource> <name> -o yaml
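
    For example, to inspect the rule object and the configuration object that are defined in the earlier examples in this chapter:

    $ oc get ingressnodefirewall ingressnodefirewall-zero-trust -o yaml
    $ oc get ingressnodefirewallconfig ingressnodefirewallconfig -n openshift-ingress-node-firewall -o yaml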

6.5.7. Troubleshooting the Ingress Node Firewall Operator

  • Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD):

    $ oc get crds | grep ingressnodefirewall

    Example output

    NAME                                                               CREATED AT
    ingressnodefirewallconfigs.ingressnodefirewall.openshift.io       2022-08-25T10:03:01Z
    ingressnodefirewallnodestates.ingressnodefirewall.openshift.io    2022-08-25T10:03:00Z
    ingressnodefirewalls.ingressnodefirewall.openshift.io             2022-08-25T10:03:00Z

  • Run the following command to view the state of the Ingress Node Firewall Operator:

    $ oc get pods -n openshift-ingress-node-firewall

    Example output

    NAME                                       READY  STATUS         RESTARTS  AGE
    ingress-node-firewall-controller-manager   2/2    Running        0         5d21h
    ingress-node-firewall-daemon-pqx56         3/3    Running        0         5d21h

    The following fields provide information about the status of the Operator: READY, STATUS, AGE, and RESTARTS. The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes.

  • Run the following command to collect all ingress firewall node pods' logs:

    $ oc adm must-gather -- gather_ingress_node_firewall

    The logs are available in the sos node’s report containing eBPF bpftool outputs at /sos_commands/ebpf. These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events.

6.5.8. Additional resources

6.6. eBPF manager Operator

6.6.1. About the eBPF Manager Operator

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.6.1.1. About Extended Berkeley Packet Filter (eBPF)

eBPF extends the original Berkeley Packet Filter for advanced network traffic filtering. It acts as a virtual machine inside the Linux kernel, allowing you to run sandboxed programs in response to events such as network packets, system calls, or kernel functions.

6.6.1.2. About the eBPF Manager Operator

eBPF Manager simplifies the management and deployment of eBPF programs within Kubernetes, as well as enhancing the security around using eBPF programs. It utilizes Kubernetes custom resource definitions (CRDs) to manage eBPF programs packaged as OCI container images. This approach helps to delineate deployment permissions and enhance security by restricting program types deployable by specific users.

eBPF Manager is a software stack designed to manage eBPF programs within Kubernetes. It facilitates the loading, unloading, modifying, and monitoring of eBPF programs in Kubernetes clusters. It includes a daemon, CRDs, an agent, and an operator:

bpfman
A system daemon that manages eBPF programs via a gRPC API.
eBPF CRDs
A set of CRDs like XdpProgram and TcProgram for loading eBPF programs, and a bpfman-generated CRD (BpfProgram) for representing the state of loaded programs.
bpfman-agent
Runs within a daemonset container, ensuring eBPF programs on each node are in the desired state.
bpfman-operator
Manages the lifecycle of the bpfman-agent and CRDs in the cluster using the Operator SDK.
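
For illustration, the following is a minimal sketch of an XdpProgram object. The bpffunctionname and nodeselector fields correspond to the columns shown in the deployment example later in this chapter; the apiVersion, interfaceselector, and bytecode image reference are assumptions made for this sketch rather than values taken from this documentation.

apiVersion: bpfman.io/v1alpha1 # assumed API version for this sketch
kind: XdpProgram
metadata:
  name: xdp-counter-example
spec:
  bpffunctionname: xdp_stats # entry point function in the compiled eBPF bytecode
  nodeselector: {} # an empty selector targets all nodes
  interfaceselector: # assumption: attach to the primary interface on each node
    primarynodeinterface: true
  bytecode:
    image:
      url: quay.io/example/xdp-counter-bytecode:latest # assumption: OCI image that contains the bytecode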

The eBPF Manager Operator offers the following features:

  • Enhances security by centralizing eBPF program loading through a controlled daemon. eBPF Manager holds the elevated privileges so that applications do not need them. eBPF program control is regulated by standard Kubernetes role-based access control (RBAC), which can allow or deny an application’s access to the different eBPF Manager CRDs that manage eBPF program loading and unloading.
  • Provides detailed visibility into active eBPF programs, improving your ability to debug issues across the system.
  • Facilitates the coexistence of multiple eBPF programs from different sources using protocols like libxdp for XDP and TC programs, enhancing interoperability.
  • Streamlines the deployment and lifecycle management of eBPF programs in Kubernetes. Developers can focus on program interaction rather than lifecycle management, with support for existing eBPF libraries like Cilium, libbpf, and Aya.
6.6.1.3. Additional resources
6.6.1.4. Next steps

6.6.2. Installing the eBPF Manager Operator

As a cluster administrator, you can install the eBPF Manager Operator by using the OpenShift Container Platform CLI or the web console.

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.6.2.1. Installing the eBPF Manager Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. To create the bpfman namespace, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: bpfman
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF| oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: bpfman-operators
      namespace: bpfman
    EOF
  3. Subscribe to the eBPF Manager Operator.

    1. To create a Subscription CR for the eBPF Manager Operator, enter the following command:

      $ cat << EOF| oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: bpfman-operator
        namespace: bpfman
      spec:
        name: bpfman-operator
        channel: alpha
        source: community-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get ip -n bpfman

    Example output

    NAME            CSV                                 APPROVAL    APPROVED
    install-ppjxl   security-profiles-operator.v0.8.5   Automatic   true

  5. To verify the version of the Operator, enter the following command:

    $ oc get csv -n bpfman

    Example output

    NAME                                DISPLAY                      VERSION   REPLACES                            PHASE
    bpfman-operator.v0.5.0              eBPF Manager Operator              0.5.0     bpfman-operator.v0.4.2              Succeeded

6.6.2.2. Installing the eBPF Manager Operator using the web console

As a cluster administrator, you can install the eBPF Manager Operator using the web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. Install the eBPF Manager Operator:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Select eBPF Manager Operator from the list of available Operators, and if prompted to Show community Operator, click Continue.
    3. Click Install.
    4. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
    5. Click Install.
  2. Verify that the eBPF Manager Operator is installed successfully:

    1. Navigate to the Operators → Installed Operators page.
    2. Ensure that eBPF Manager Operator is listed in the bpfman project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
      • Navigate to the Workloads → Pods page and check the logs for pods in the bpfman project.
6.6.2.3. Next steps

6.6.3. Deploying an eBPF program

As a cluster administrator, you can deploy containerized eBPF applications with the eBPF Manager Operator.

For the example eBPF program deployed in this procedure, the sample manifest does the following:

First, it creates basic Kubernetes objects like Namespace, ServiceAccount, and ClusterRoleBinding. It also creates an XdpProgram object, an instance of a custom resource definition (CRD) that eBPF Manager provides, which loads the eBPF XDP program. Each program type has its own CRD, but they are similar in what they do. For more information, see Loading eBPF Programs On Kubernetes.

Second, it creates a daemon set which runs a user space program that reads the eBPF maps that the eBPF program is populating. This eBPF map is volume mounted using a Container Storage Interface (CSI) driver. By volume mounting the eBPF map in the container instead of accessing it on the host, the application pod can access the eBPF maps without being privileged. For more information on how the CSI is configured, see Deploying an eBPF enabled application On Kubernetes.

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.6.3.1. Deploying a containerized eBPF program

As a cluster administrator, you can deploy an eBPF program to nodes on your cluster. In this procedure, a sample containerized eBPF program is installed in the go-xdp-counter namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.
  • You have installed the eBPF Manager Operator.

Procedure

  1. To download the manifest, enter the following command:

    $ curl -L https://github.com/bpfman/bpfman/releases/download/v0.5.1/go-xdp-counter-install-selinux.yaml -o go-xdp-counter-install-selinux.yaml
  2. To deploy the sample eBPF application, enter the following command:

    $ oc create -f go-xdp-counter-install-selinux.yaml

    Example output

    namespace/go-xdp-counter created
    serviceaccount/bpfman-app-go-xdp-counter created
    clusterrolebinding.rbac.authorization.k8s.io/xdp-binding created
    daemonset.apps/go-xdp-counter-ds created
    xdpprogram.bpfman.io/go-xdp-counter-example created
    selinuxprofile.security-profiles-operator.x-k8s.io/bpfman-secure created

  3. To confirm that the eBPF sample application deployed successfully, enter the following command:

    $ oc get all -o wide -n go-xdp-counter

    Example output

    NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE                                 NOMINATED NODE   READINESS GATES
    pod/go-xdp-counter-ds-4m9cw   1/1     Running   0          44s   10.129.0.92    ci-ln-dcbq7d2-72292-ztrkp-master-1   <none>           <none>
    pod/go-xdp-counter-ds-7hzww   1/1     Running   0          44s   10.130.0.86    ci-ln-dcbq7d2-72292-ztrkp-master-2   <none>           <none>
    pod/go-xdp-counter-ds-qm9zx   1/1     Running   0          44s   10.128.0.101   ci-ln-dcbq7d2-72292-ztrkp-master-0   <none>           <none>
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS       IMAGES                                           SELECTOR
    daemonset.apps/go-xdp-counter-ds   3         3         3       3            3           <none>          44s   go-xdp-counter   quay.io/bpfman-userspace/go-xdp-counter:v0.5.0   name=go-xdp-counter

  4. To confirm that the example XDP program is running, enter the following command:

    $ oc get xdpprogram go-xdp-counter-example

    Example output

    NAME                     BPFFUNCTIONNAME   NODESELECTOR   STATUS
    go-xdp-counter-example   xdp_stats         {}             ReconcileSuccess

  5. To confirm that the XDP program is collecting data, enter the following command:

    $ oc logs <pod_name> -n go-xdp-counter

    Replace <pod_name> with the name of an XDP program pod, such as go-xdp-counter-ds-4m9cw.

    Example output

    2024/08/13 15:20:06 15016 packets received
    2024/08/13 15:20:06 93581579 bytes received
    
    2024/08/13 15:20:09 19284 packets received
    2024/08/13 15:20:09 99638680 bytes received
    
    2024/08/13 15:20:12 23522 packets received
    2024/08/13 15:20:12 105666062 bytes received
    
    2024/08/13 15:20:15 27276 packets received
    2024/08/13 15:20:15 112028608 bytes received
    
    2024/08/13 15:20:18 29470 packets received
    2024/08/13 15:20:18 112732299 bytes received
    
    2024/08/13 15:20:21 32588 packets received
    2024/08/13 15:20:21 113813781 bytes received

6.7. Egress Firewall

6.7.1. Viewing an egress firewall for a project

As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall.

6.7.1.1. Viewing an EgressFirewall object

You can view an EgressFirewall object in your cluster.

Prerequisites

  • A cluster using the OVN-Kubernetes network plugin.
  • Install the OpenShift Command-line Interface (CLI), commonly known as oc.
  • You must log in to the cluster.

Procedure

  1. Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command:

    $ oc get egressfirewall --all-namespaces
  2. To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect.

    $ oc describe egressfirewall <policy_name>

    Example output

    Name:		default
    Namespace:	project1
    Created:	20 minutes ago
    Labels:		<none>
    Annotations:	<none>
    Rule:		Allow to 1.2.3.0/24
    Rule:		Allow to www.example.com
    Rule:		Deny to 0.0.0.0/0

6.7.2. Editing an egress firewall for a project

As a cluster administrator, you can modify network traffic rules for an existing egress firewall.

6.7.2.1. Editing an EgressFirewall object

As a cluster administrator, you can update the egress firewall for a project.

Prerequisites

  • A cluster using the OVN-Kubernetes network plugin.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster as a cluster administrator.

Procedure

  1. Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project.

    $ oc get -n <project> egressfirewall
  2. Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy.

    $ oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml

    Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to.

  3. After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object.

    $ oc replace -f <filename>.yaml

6.7.3. Removing an egress firewall from a project

As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster.

6.7.3.1. Removing an EgressFirewall object

As a cluster administrator, you can remove an egress firewall from a project.

Prerequisites

  • A cluster using the OVN-Kubernetes network plugin.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster as a cluster administrator.

Procedure

  1. Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project.

    $ oc get -n <project> egressfirewall
  2. Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object.

    $ oc delete -n <project> egressfirewall <name>

6.7.4. Configuring an egress firewall for a project

As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster.

6.7.4.1. How an egress firewall works in a project

As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:

  • A pod can only connect to internal hosts and cannot initiate connections to the public internet.
  • A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster.
  • A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster.
  • A pod can connect to only specific external hosts.

For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can prevent application developers from updating from Python pip mirrors and force updates to come only from approved sources.
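
As a minimal sketch of the second scenario, the following EgressFirewall object allows egress only to a hypothetical approved mirror host and denies all other traffic that leaves the cluster. The mirror host name is an illustrative assumption.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow traffic to the approved mirror (hypothetical host name)
  - type: Allow
    to:
      dnsName: mirror.example.com
  # Deny all other traffic leaving the cluster
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0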

Note

Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.

You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:

  • An IP address range in CIDR format
  • A DNS name that resolves to an IP address
  • A port number
  • A protocol that is one of the following protocols: TCP, UDP, and SCTP
Important

If your egress firewall includes a deny rule for 0.0.0.0/0, access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers.

The following example illustrates the order of the egress firewall rules necessary to ensure API server access:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <namespace> 1
spec:
  egress:
  - to:
      cidrSelector: <api_server_address_range> 2
    type: Allow
# ...
  - to:
      cidrSelector: 0.0.0.0/0 3
    type: Deny
1
The namespace for the egress firewall.
2
The IP address range that includes your OpenShift Container Platform API servers.
3
A global deny rule prevents access to the OpenShift Container Platform API servers.

To find the IP address for your API servers, run oc get ep kubernetes -n default.

For more information, see BZ#1988324.
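
Alternatively, a nodeSelector type allow rule can admit traffic to the nodes that host the API servers. The following sketch assumes the conventional control plane node label; verify the labels that are present on your nodes before you use it.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <namespace>
spec:
  egress:
  # Allow traffic to control plane nodes, which host the API servers
  # (the label below is the conventional control plane label; confirm it on your nodes)
  - to:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/master: ""
    type: Allow
  # The global deny rule must come after the allow rule
  - to:
      cidrSelector: 0.0.0.0/0
    type: Deny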

Warning

Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.

6.7.4.1.1. Limitations of an egress firewall

An egress firewall has the following limitations:

  • No project can have more than one EgressFirewall object.
  • A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project.
  • If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.

Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.

An Egress Firewall resource can be created in the kube-node-lease, kube-public, kube-system, openshift and openshift- projects.

6.7.4.1.2. Matching order for egress firewall policy rules

The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.

6.7.4.1.3. How Domain Name Server (DNS) resolution works

If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:

  • Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
  • The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
  • Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes.
Note

Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS.

However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
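
The following sketch shows both kinds of rules together. The DNS server address 203.0.113.10 is an illustrative assumption; replace it with the address of the DNS server that your pods actually use.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow the pods to query the external DNS server (hypothetical address)
  - type: Allow
    to:
      cidrSelector: 203.0.113.10/32
    ports:
    - port: 53
      protocol: UDP
  # Allow traffic to the destination that is specified by DNS name
  - type: Allow
    to:
      dnsName: www.example.com
  # Deny all other traffic leaving the cluster
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0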

6.7.4.1.3.1. Improved DNS resolution and resolving wildcard domain names

There might be situations where the IP addresses associated with a DNS record change frequently, or you might want to specify wildcard domain names in your egress firewall policy rules.

In this situation, the OVN-Kubernetes cluster manager creates a DNSNameResolver custom resource object for each unique DNS name used in your egress firewall policy rules. This custom resource stores the following information:

Important

Improved DNS resolution for egress firewall rules is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Example DNSNameResolver CR definition

apiVersion: networking.openshift.io/v1alpha1
kind: DNSNameResolver
spec:
  name: www.example.com. 1
status:
  resolvedNames:
  - dnsName: www.example.com. 2
    resolvedAddress:
    - ip: "1.2.3.4" 3
      ttlSeconds: 60 4
      lastLookupTime: "2023-08-08T15:07:04Z" 5

1
The DNS name. This can be either a standard DNS name or a wildcard DNS name. For a wildcard DNS name, the DNS name resolution information contains all of the DNS names that match the wildcard DNS name.
2
The resolved DNS name matching the spec.name field. If the spec.name field contains a wildcard DNS name, then multiple dnsName entries are created that contain the standard DNS names that match the wildcard DNS name when resolved. If the wildcard DNS name can also be successfully resolved, then this field also stores the wildcard DNS name.
3
The current IP addresses associated with the DNS name.
4
The last time-to-live (TTL) duration.
5
The last lookup time.

If during DNS resolution the DNS name in the query matches any name defined in a DNSNameResolver CR, then the previous information is updated accordingly in the CR status field. For unsuccessful DNS wildcard name lookups, the request is retried after a default TTL of 30 minutes.

The OVN-Kubernetes cluster manager watches for updates to an EgressFirewall custom resource object, and creates, modifies, or deletes DNSNameResolver CRs associated with those egress firewall policies when that update occurs.

Warning

Do not modify DNSNameResolver custom resources directly. This can lead to unwanted behavior of your egress firewall.
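
To observe, but not modify, the DNSNameResolver objects that the cluster manager maintains, you can run a read-only query such as the following. The plural resource name and the cluster-wide listing are assumptions for this sketch.

$ oc get dnsnameresolvers --all-namespaces -o yaml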

6.7.4.2. EgressFirewall custom resource (CR) object

You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to.

The following YAML describes an EgressFirewall CR object:

EgressFirewall object

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: <name> 1
spec:
  egress: 2
    ...

1
The name for the object must be default.
2
A collection of one or more egress network policy rules as described in the following section.
6.7.4.2.1. EgressFirewall rules

The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects.

Egress policy rule stanza

egress:
- type: <type> 1
  to: 2
    cidrSelector: <cidr> 3
    dnsName: <dns_name> 4
    nodeSelector: <label_name>: <label_value> 5
  ports: 6
      ...

1
The type of rule. The value must be either Allow or Deny.
2
A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule.
3
An IP address range in CIDR format.
4
A DNS domain name.
5
Labels are key/value pairs that the user defines. Labels are attached to objects, such as pods. The nodeSelector allows for one or more node labels to be selected and attached to pods.
6
Optional: A stanza describing a collection of network ports and protocols for the rule.

Ports stanza

ports:
- port: <port> 1
  protocol: <protocol> 2

1
A network port, such as 80 or 443. If you specify a value for this field, you must also specify a value for protocol.
2
A network protocol. The value must be either TCP, UDP, or SCTP.
6.7.4.2.2. Example EgressFirewall CR objects

The following example defines several egress firewall policy rules:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress: 1
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
1
A collection of egress firewall policy rule objects.

The following example defines a policy rule that denies traffic to the host at the 172.16.1.1 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 172.16.1.1
    ports:
    - port: 80
      protocol: TCP
    - port: 443
6.7.4.2.3. Example nodeSelector for EgressFirewall

As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label using nodeSelector. Labels can be applied to one or more nodes. The following is an example with the region=east label:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - to:
      nodeSelector:
        matchLabels:
          region: east
    type: Allow
Tip

Instead of adding manual rules per node IP address, use node selectors to create a label that allows pods behind an egress firewall to access host network pods.

6.7.4.3. Creating an egress firewall policy object

As a cluster administrator, you can create an egress firewall policy object for a project.

Important

If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules.

Prerequisites

  • A cluster that uses the OVN-Kubernetes network plugin.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster as a cluster administrator.

Procedure

  1. Create a policy rule:

    1. Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.
    2. In the file you created, define an egress policy object.
  2. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to.

    $ oc create -f <policy_name>.yaml -n <project>

    In the following example, a new EgressFirewall object is created in a project named project1:

    $ oc create -f default.yaml -n project1

    Example output

    egressfirewall.k8s.ovn.org/default created

  3. Optional: Save the <policy_name>.yaml file so that you can make changes later.

6.8. Configuring IPsec encryption

By enabling IPsec, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.

IPsec is disabled by default. You can enable IPsec either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview.

The following support limitations exist for IPsec on a OpenShift Container Platform cluster:

  • You must disable IPsec before updating to OpenShift Container Platform 4.15. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. (OCPBUGS-43323)
  • On IBM Cloud®, IPsec supports only NAT-T. Encapsulating Security Payload (ESP) is not supported on this platform.
  • If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, IPsec encryption is not supported for either pod-to-pod traffic or traffic to external hosts.
  • Using ESP hardware offloading on any network interface is not supported if one or more of those interfaces is attached to Open vSwitch (OVS). Enabling IPsec for your cluster triggers the use of IPsec with interfaces attached to OVS. By default, OpenShift Container Platform disables ESP hardware offloading on any interfaces attached to OVS.
  • If you enabled IPsec for network interfaces that are not attached to OVS, a cluster administrator must manually disable ESP hardware offloading on each interface that is not attached to OVS.

The following list outlines key tasks in the IPsec documentation:

  • Enable and disable IPsec after cluster installation.
  • Configure IPsec encryption for traffic between the cluster and external hosts.
  • Verify that IPsec encrypts traffic between pods on different nodes.

6.8.1. Modes of operation

When using IPsec on your OpenShift Container Platform cluster, you can choose from the following operating modes:

Table 6.8. IPsec modes of operation
Mode | Description | Default

Disabled

No traffic is encrypted. This is the cluster default.

Yes

Full

Pod-to-pod traffic is encrypted as described in "Types of network traffic flows encrypted by pod-to-pod IPsec". Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec.

No

External

Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec.

No

6.8.2. Prerequisites

For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met:

  • The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true.
  • The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see About the Kubernetes NMState Operator.

    Note

    The NMState Operator is supported on Google Cloud Platform (GCP) only for configuring IPsec.

  • The Butane tool (butane) is installed. To install Butane, see Installing Butane.

These prerequisites are required to add certificates into the host NSS database and to configure IPsec to communicate with external hosts.
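
As a sketch of satisfying the first prerequisite, the following merge patch sets local gateway mode on the cluster. Because this setting changes how egress traffic is routed, review the gatewayConfig documentation before applying it to a production cluster.

$ oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'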

6.8.3. Network connectivity requirements when IPsec is enabled

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.

Table 6.9. Ports used for all-machine to all-machine communications
Protocol   Port   Description
UDP        500    IPsec IKE packets
UDP        4500   IPsec NAT-T packets
ESP        N/A    IPsec Encapsulating Security Payload (ESP)

6.8.4. IPsec encryption for pod-to-pod traffic

For IPsec encryption of pod-to-pod traffic, the following sections describe which specific pod-to-pod traffic is encrypted, what kind of encryption protocol is used, and how X.509 certificates are handled. These sections do not apply to IPsec encryption between the cluster and external hosts, which you must configure manually for your specific external network infrastructure.

6.8.4.1. Types of network traffic flows encrypted by pod-to-pod IPsec

With IPsec enabled, only the following network traffic flows between pods are encrypted:

  • Traffic between pods on different nodes on the cluster network
  • Traffic from a pod on the host network to a pod on the cluster network

The following traffic flows are not encrypted:

  • Traffic between pods on the same node on the cluster network
  • Traffic between pods on the host network
  • Traffic from a pod on the cluster network to a pod on the host network

The encrypted and unencrypted flows are illustrated in the following diagram:

IPsec encrypted and unencrypted traffic flows
6.8.4.2. Encryption protocol and IPsec mode

The encrypt cipher used is AES-GCM-16-256. The integrity check value (ICV) is 16 bytes. The key length is 256 bits.

The IPsec mode used is Transport mode, a mode that encrypts end-to-end communication by adding an Encapsulating Security Payload (ESP) header to the IP header of the original packet and encrypting the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication.

6.8.4.3. Security certificate generation and rotation

The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO.

The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse.

6.8.5. IPsec encryption for external traffic

OpenShift Container Platform supports IPsec encryption for traffic to external hosts with TLS certificates that you must supply.

6.8.5.1. Supported platforms

This feature is supported on the following platforms:

  • Bare metal
  • Google Cloud Platform (GCP)
  • Red Hat OpenStack Platform (RHOSP)
  • VMware vSphere
Important

Red Hat Enterprise Linux (RHEL) worker nodes do not support IPsec encryption for external traffic.

If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, configuring IPsec for encrypting traffic to external hosts is not supported.

6.8.5.2. Limitations

Ensure that the following prohibitions are observed:

  • IPv6 configuration is not currently supported by the NMState Operator when configuring IPsec for external traffic.
  • Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can conflict with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node.

6.8.6. Enabling IPsec encryption

As a cluster administrator, you can enable pod-to-pod IPsec encryption and IPsec encryption between the cluster and external IPsec endpoints.

You can configure IPsec in either of the following modes:

  • Full: Encryption for pod-to-pod and external traffic
  • External: Encryption for external traffic

If you need to configure encryption for external traffic in addition to pod-to-pod traffic, you must also complete the "Configuring IPsec encryption for external traffic" procedure.

Prerequisites

  • Install the OpenShift CLI (oc).
  • You are logged in to the cluster as a user with cluster-admin privileges.
  • You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header.

Procedure

  1. To enable IPsec encryption, enter the following command:

    $ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{
      "spec":{
        "defaultNetwork":{
          "ovnKubernetesConfig":{
            "ipsecConfig":{
              "mode":<mode>
            }}}}}'

    where:

    mode
    Specify External to encrypt only traffic to external hosts or specify Full to encrypt pod to pod traffic and optionally traffic to external hosts. By default, IPsec is disabled.
  2. Optional: If you need to encrypt traffic to external hosts, complete the "Configuring IPsec encryption for external traffic" procedure.

Verification

  1. To find the names of the OVN-Kubernetes data plane pods, enter the following command:

    $ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node

    Example output

    ovnkube-node-5xqbf                       8/8     Running   0              28m
    ovnkube-node-6mwcx                       8/8     Running   0              29m
    ovnkube-node-ck5fr                       8/8     Running   0              31m
    ovnkube-node-fr4ld                       8/8     Running   0              26m
    ovnkube-node-wgs4l                       8/8     Running   0              33m
    ovnkube-node-zfvcl                       8/8     Running   0              34m

  2. Verify that IPsec is enabled on your cluster by running the following command:

    Note

    As a cluster administrator, you can verify that IPsec is enabled between pods on your cluster when IPsec is configured in Full mode. This step does not verify whether IPsec is working between your cluster and external hosts.

    $ oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec

    where:

    <XXXXX>
    Specifies the random sequence of letters for a pod from the previous step.

    Example output

    true

6.8.7. Configuring IPsec encryption for external traffic

As a cluster administrator, to encrypt external traffic with IPsec you must configure IPsec for your network infrastructure, including providing PKCS#12 certificates. Because this procedure uses Butane to create machine configs, you must have the butane command installed.

Note

After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to roll out the new machine config.

Prerequisites

  • Install the OpenShift CLI (oc).
  • You have installed the butane utility on your local computer.
  • You have installed the NMState Operator on the cluster.
  • You are logged in to the cluster as a user with cluster-admin privileges.
  • You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
  • You enabled IPsec in either Full or External mode on your cluster.
  • The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true.

Procedure

  1. Create an IPsec configuration with an NMState Operator node network configuration policy. For more information, see Libreswan as an IPsec VPN implementation.

    1. To identify the IP address of the cluster node that is the IPsec endpoint, enter the following command:

      $ oc get nodes
    2. Create a file named ipsec-config.yaml that contains a node network configuration policy for the NMState Operator, such as in the following examples. For an overview about NodeNetworkConfigurationPolicy objects, see The Kubernetes NMState project.

      Example NMState IPsec transport configuration

      apiVersion: nmstate.io/v1
      kind: NodeNetworkConfigurationPolicy
      metadata:
        name: ipsec-config
      spec:
        nodeSelector:
          kubernetes.io/hostname: "<hostname>" 1
        desiredState:
          interfaces:
          - name: <interface_name> 2
            type: ipsec
            libreswan:
              left: <cluster_node> 3
              leftid: '%fromcert'
              leftrsasigkey: '%cert'
              leftcert: left_server
              leftmodecfgclient: false
              right: <external_host> 4
              rightid: '%fromcert'
              rightrsasigkey: '%cert'
              rightsubnet: <external_address>/32 5
              ikev2: insist
              type: transport

      1
      Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration.
      2
      Specifies the name of the interface to create on the host.
      3
      Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates.
      4
      Specifies the external host name, such as host.example.com. The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates.
      5
      Specifies the IP address of the external host, such as 10.1.2.3/32.

      Example NMState IPsec tunnel configuration

      apiVersion: nmstate.io/v1
      kind: NodeNetworkConfigurationPolicy
      metadata:
        name: ipsec-config
      spec:
        nodeSelector:
          kubernetes.io/hostname: "<hostname>" 1
        desiredState:
          interfaces:
          - name: <interface_name> 2
            type: ipsec
            libreswan:
              left: <cluster_node> 3
              leftid: '%fromcert'
              leftmodecfgclient: false
              leftrsasigkey: '%cert'
              leftcert: left_server
              right: <external_host> 4
              rightid: '%fromcert'
              rightrsasigkey: '%cert'
              rightsubnet: <external_address>/32 5
              ikev2: insist
              type: tunnel

      1
      Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration.
      2
      Specifies the name of the interface to create on the host.
      3
      Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates.
      4
      Specifies the external host name, such as host.example.com. The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates.
      5
      Specifies the IP address of the external host, such as 10.1.2.3/32.
    3. To configure the IPsec interface, enter the following command:

      $ oc create -f ipsec-config.yaml
  2. Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps.

    • left_server.p12: The certificate bundle for the IPsec endpoints
    • ca.pem: The certificate authority that you signed your certificates with
  3. Create a machine config to add your certificates to the cluster:

    1. To create Butane config files for the control plane and worker nodes, enter the following command:

      $ for role in master worker; do
        cat >> "99-ipsec-${role}-endpoint-config.bu" <<-EOF
        variant: openshift
        version: 4.17.0
        metadata:
          name: 99-${role}-import-certs
          labels:
            machineconfiguration.openshift.io/role: $role
        systemd:
          units:
          - name: ipsec-import.service
            enabled: true
            contents: |
              [Unit]
              Description=Import external certs into ipsec NSS
              Before=ipsec.service
      
              [Service]
              Type=oneshot
              ExecStart=/usr/local/bin/ipsec-addcert.sh
              RemainAfterExit=false
              StandardOutput=journal
      
              [Install]
              WantedBy=multi-user.target
        storage:
          files:
          - path: /etc/pki/certs/ca.pem
            mode: 0400
            overwrite: true
            contents:
              local: ca.pem
          - path: /etc/pki/certs/left_server.p12
            mode: 0400
            overwrite: true
            contents:
              local: left_server.p12
          - path: /usr/local/bin/ipsec-addcert.sh
            mode: 0740
            overwrite: true
            contents:
              inline: |
                #!/bin/bash -e
                echo "importing cert to NSS"
                certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem
                pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/
                certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/
      EOF
      done
    2. To transform the Butane files that you created in the previous step into machine configs, enter the following command:

      $ for role in master worker; do
        butane -d . 99-ipsec-${role}-endpoint-config.bu -o ./99-ipsec-$role-endpoint-config.yaml
      done
  4. To apply the machine configs to your cluster, enter the following command:

    $ for role in master worker; do
      oc apply -f 99-ipsec-${role}-endpoint-config.yaml
    done
    Important

    As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available.

  5. Check the machine config pool status by entering the following command:

    $ oc get mcp

    A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

    Note

    By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.

  6. To confirm that IPsec machine configs rolled out successfully, enter the following commands:

    1. Confirm that the IPsec machine configs were created:

      $ oc get mc | grep ipsec

      Example output

      80-ipsec-master-extensions        3.2.0        6d15h
      80-ipsec-worker-extensions        3.2.0        6d15h

    2. Confirm that the IPsec extensions are applied to control plane nodes:

      $ oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c

      Expected output

      2

    3. Confirm that the IPsec extensions are applied to worker nodes:

      $ oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c

      Expected output

      2

Additional resources

6.8.8. Disabling IPsec encryption for an external IPsec endpoint

As a cluster administrator, you can remove an existing IPsec tunnel to an external host.

Prerequisites

  • Install the OpenShift CLI (oc).
  • You are logged in to the cluster as a user with cluster-admin privileges.
  • You enabled IPsec in either Full or External mode on your cluster.

Procedure

  1. Create a file named remove-ipsec-tunnel.yaml with the following YAML:

    kind: NodeNetworkConfigurationPolicy
    apiVersion: nmstate.io/v1
    metadata:
      name: <name>
    spec:
      nodeSelector:
        kubernetes.io/hostname: <node_name>
      desiredState:
        interfaces:
        - name: <tunnel_name>
          type: ipsec
          state: absent

    where:

    name
    Specifies a name for the node network configuration policy.
    node_name
    Specifies the name of the node where the IPsec tunnel that you want to remove exists.
    tunnel_name
    Specifies the interface name for the existing IPsec tunnel.
  2. To remove the IPsec tunnel, enter the following command:

    $ oc apply -f remove-ipsec-tunnel.yaml

6.8.9. Disabling IPsec encryption

As a cluster administrator, you can disable IPsec encryption.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster with a user with cluster-admin privileges.

Procedure

  1. To disable IPsec encryption, enter the following command:

    $ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{
      "spec":{
        "defaultNetwork":{
          "ovnKubernetesConfig":{
            "ipsecConfig":{
              "mode":"Disabled"
            }}}}}'
  2. Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets.

6.8.10. Additional resources

Chapter 7. Cluster Network Operator in OpenShift Container Platform

You can use the Cluster Network Operator (CNO) to deploy and manage cluster network components on an OpenShift Container Platform cluster, including the Container Network Interface (CNI) network plugin selected for the cluster during installation.

7.1. Cluster Network Operator

The Cluster Network Operator implements the network API from the operator.openshift.io API group. The Operator deploys the OVN-Kubernetes network plugin, or the network provider plugin that you selected during cluster installation, by using a daemon set.

Procedure

The Cluster Network Operator is deployed during installation as a Kubernetes Deployment.

  1. Run the following command to view the Deployment status:

    $ oc get -n openshift-network-operator deployment/network-operator

    Example output

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    network-operator   1/1     1            1           56m

  2. Run the following command to view the state of the Cluster Network Operator:

    $ oc get clusteroperator/network

    Example output

    NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    network   4.16.1     True        False         False      50m

    The following fields provide information about the status of the operator: AVAILABLE, PROGRESSING, and DEGRADED. The AVAILABLE field is True when the Cluster Network Operator reports an available status condition.

7.2. Viewing the cluster network configuration

Every new OpenShift Container Platform installation has a network.config object named cluster.

Procedure

  • Use the oc describe command to view the cluster network configuration:

    $ oc describe network.config/cluster

    Example output

    Name:         cluster
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  config.openshift.io/v1
    Kind:         Network
    Metadata:
      Creation Timestamp:  2024-08-08T11:25:56Z
      Generation:          3
      Resource Version:    29821
      UID:                 808dd2be-5077-4ff7-b6bb-21b7110126c7
    Spec: 1
      Cluster Network:
        Cidr:         10.128.0.0/14
        Host Prefix:  23
      External IP:
        Policy:
      Network Diagnostics:
        Mode:
        Source Placement:
        Target Placement:
      Network Type:  OVNKubernetes
      Service Network:
        172.30.0.0/16
    Status: 2
      Cluster Network:
        Cidr:               10.128.0.0/14
        Host Prefix:        23
      Cluster Network MTU:  1360
      Conditions:
        Last Transition Time:  2024-08-08T11:51:50Z
        Message:
        Observed Generation:   0
        Reason:                AsExpected
        Status:                True
        Type:                  NetworkDiagnosticsAvailable
      Network Type:            OVNKubernetes
      Service Network:
        172.30.0.0/16
    Events:  <none>

    1
    The Spec field displays the configured state of the cluster network.
    2
    The Status field displays the current state of the cluster network configuration.

7.3. Viewing Cluster Network Operator status

You can inspect the status and view the details of the Cluster Network Operator using the oc describe command.

Procedure

  • Run the following command to view the status of the Cluster Network Operator:

    $ oc describe clusteroperators/network

7.4. Enabling IP forwarding globally

From OpenShift Container Platform 4.14 onward, global IP address forwarding is disabled on OVN-Kubernetes based cluster deployments to prevent undesirable effects for cluster administrators with nodes acting as routers. However, in some cases where an administrator expects traffic to be forwarded, a new configuration parameter, ipForwarding, is available to allow forwarding of all IP traffic.

To re-enable IP forwarding for all traffic on OVN-Kubernetes managed interfaces, set the gatewayConfig.ipForwarding specification in the Cluster Network Operator to Global by following this procedure:

Procedure

  1. Backup the existing network configuration by running the following command:

    $ oc get network.operator cluster -o yaml > network-config-backup.yaml
  2. Run the following command to modify the existing network configuration:

    $ oc edit network.operator cluster
    1. Add or update the following block under spec as illustrated in the following example:

      spec:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        serviceNetwork:
        - 172.30.0.0/16
        networkType: OVNKubernetes
        clusterNetworkMTU: 8900
        defaultNetwork:
          ovnKubernetesConfig:
            gatewayConfig:
              ipForwarding: Global
    2. Save and close the file.
  3. After applying the changes, the OpenShift Cluster Network Operator (CNO) applies the update across the cluster. You can monitor the progress by using the following command:

    $ oc get clusteroperators network

    The status should eventually report as Available, Progressing=False, and Degraded=False.

  4. Alternatively, you can enable IP forwarding globally by running the following command:

    $ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}'
    Note

    The other valid option for this parameter is Restricted in case you want to revert this change. Restricted is the default and with that setting global IP address forwarding is disabled.
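
    For example, reverting to the default follows the same pattern as the previous patch command:

    $ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Restricted"}}}}}'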

7.5. Viewing Cluster Network Operator logs

You can view Cluster Network Operator logs by using the oc logs command.

Procedure

  • Run the following command to view the logs of the Cluster Network Operator:

    $ oc logs --namespace=openshift-network-operator deployment/network-operator

7.6. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:

clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin. OVNKubernetes is the only supported plugin during installation.
Note

After cluster installation, you can only modify the clusterNetwork IP address range.

You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
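
As a minimal sketch, the following shows how these fields fit together in the CNO object named cluster. The CIDR values are the common defaults shown elsewhere in this document, and the gatewayConfig stanza is included only to illustrate where plugin-specific settings belong.

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: # pod IP address pools; only this range can be modified after installation
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: # IP address pool for services
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes # the only supported plugin during installation
    ovnKubernetesConfig:
      gatewayConfig:
        ipForwarding: Restricted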

7.6.1. Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 7.1. Cluster Network Operator configuration object
Field | Type | Description

metadata.name

string

The name of the CNO object. This name is always cluster.

spec.clusterNetwork

array

A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

spec.serviceNetwork

array

A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

This value is read-only and inherited from the Network.config.openshift.io object named cluster during cluster installation.

spec.defaultNetwork

object

Configures the network plugin for the cluster network.

spec.kubeProxyConfig

object

The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 7.2. defaultNetwork object
Field | Type | Description

type

string

OVNKubernetes. The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation.

Note

OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters.

ovnKubernetesConfig

object

This object is only valid for the OVN-Kubernetes network plugin.

Configuration for the OVN-Kubernetes network plugin

The following table describes the configuration fields for the OVN-Kubernetes network plugin:

Table 7.3. ovnKubernetesConfig object
Field | Type | Description

mtu

integer

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This value is normally configured automatically.

genevePort

integer

The UDP port for the Geneve overlay network.

ipsecConfig

object

An object describing the IPsec mode for the cluster.

ipv4

object

Specifies a configuration object for IPv4 settings.

ipv6

object

Specifies a configuration object for IPv6 settings.

policyAuditConfig

object

Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.

gatewayConfig

object

Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.

Note

While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.

Table 7.4. ovnKubernetesConfig.ipv4 object
Field | Type | Description

internalTransitSwitchSubnet

string

If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. This subnet is used for the distributed transit switch that enables east-west traffic. The subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster.

The default value is 100.88.0.0/16.

internalJoinSubnet

string

If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23, then the maximum number of nodes is 2^(23-14)=512.

The default value is 100.64.0.0/16.
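
For example, if both default IPv4 internal subnets overlap with your existing infrastructure, you can override them in the ipv4 object. The following is a minimal sketch; the replacement ranges shown are illustrative placeholders, not recommended values:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    ipv4:
      internalTransitSwitchSubnet: 100.69.0.0/16   # illustrative non-overlapping range
      internalJoinSubnet: 100.65.0.0/16            # illustrative non-overlapping range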

Table 7.5. ovnKubernetesConfig.ipv6 object
Field | Type | Description

internalTransitSwitchSubnet

string

The subnet for the distributed transit switch that enables east-west traffic. If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself, and it must be large enough to accommodate one IP address per node in your cluster.

The default value is fd97::/64.

internalJoinSubnet

string

If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster.

The default value is fd98::/64.

Table 7.6. policyAuditConfig object
Field | Type | Description

rateLimit

integer

The maximum number of messages to generate every second per node. The default value is 20 messages per second.

maxFileSize

integer

The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB.

maxLogFiles

integer

The maximum number of log files that are retained.

destination

string

One of the following additional audit log targets:

libc
The libc syslog() function of the journald process on the host.
udp:<host>:<port>
A syslog server. Replace <host>:<port> with the host and port of the syslog server.
unix:<file>
A Unix Domain Socket file specified by <file>.
null
Do not send the audit logs to any additional target.

syslogFacility

string

The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
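
As a sketch of how the audit fields fit together, the following policyAuditConfig stanza sends audit messages to a remote syslog server. The server address and port are placeholder values:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    policyAuditConfig:
      destination: "udp:203.0.113.10:514"   # placeholder syslog host and port
      rateLimit: 20
      syslogFacility: local0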

Table 7.7. gatewayConfig object
Field | Type | Description

routingViaHost

boolean

Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false.

This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.

ipForwarding

object

You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted. For updates to OpenShift Container Platform 4.14 or later, the default is Global.

ipv4

object

Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses.

ipv6

object

Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses.

Table 7.8. gatewayConfig.ipv4 object
Field | Type | Description

internalMasqueradeSubnet

string

The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29.

Important

For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet.

Table 7.9. gatewayConfig.ipv6 object
Field | Type | Description

internalMasqueradeSubnet

string

The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125.

Important

For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet.
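
The following sketch combines the gateway fields from the preceding tables: it routes egress traffic through the host networking stack and overrides both masquerade subnets. The replacement subnets are illustrative only:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true
      ipv4:
        internalMasqueradeSubnet: 169.254.168.0/29   # illustrative replacement
      ipv6:
        internalMasqueradeSubnet: fd68::/125         # illustrative replacement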

Table 7.10. ipsecConfig object
Field | Type | Description

mode

string

Specifies the behavior of the IPsec implementation. Must be one of the following values:

  • Disabled: IPsec is not enabled on cluster nodes.
  • External: IPsec is enabled for network traffic with external hosts.
  • Full: IPsec is enabled for pod traffic and network traffic with external hosts.
Note

You can only change the configuration for your cluster network plugin during cluster installation, except for the gatewayConfig field that can be changed at runtime as a postinstallation activity.
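
Because gatewayConfig can be changed at runtime, a typical postinstallation change is a merge patch against the live Network object. The following command is a sketch of enabling routingViaHost on a running cluster:

$ oc patch network.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'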

Example OVN-Kubernetes configuration with IPsec enabled

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full

7.6.2. Cluster Network Operator example configuration

A complete CNO configuration is specified in the following example:

Example Cluster Network Operator object

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
  clusterNetworkMTU: 8900
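
To compare this example with the live configuration on a running cluster, you can view the same Network object directly:

$ oc get network.operator.openshift.io cluster -o yaml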

7.7. Additional resources

Chapter 8. DNS Operator in OpenShift Container Platform

In OpenShift Container Platform, the DNS Operator deploys and manages a CoreDNS instance to provide a name resolution service to pods inside the cluster, enables DNS-based Kubernetes Service discovery, and resolves internal cluster.local names.

8.1. Checking the status of the DNS Operator

The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution.

Procedure

The DNS Operator is deployed during installation with a Deployment object.

  1. Use the oc get command to view the deployment status:

    $ oc get -n openshift-dns-operator deployment/dns-operator

    Example output

    NAME           READY     UP-TO-DATE   AVAILABLE   AGE
    dns-operator   1/1       1            1           23h

  2. Use the oc get command to view the state of the DNS Operator:

    $ oc get clusteroperator/dns

    Example output

    NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    dns       4.1.15-0.11  True        False         False      92m

    AVAILABLE, PROGRESSING, and DEGRADED provide information about the status of the Operator. AVAILABLE is True when at least 1 pod from the CoreDNS daemon set reports an Available status condition, and the DNS service has a cluster IP address.

8.2. View the default DNS

Every new OpenShift Container Platform installation has a dns.operator named default.

Procedure

  1. Use the oc describe command to view the default dns:

    $ oc describe dns.operator/default

    Example output

    Name:         default
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  operator.openshift.io/v1
    Kind:         DNS
    ...
    Status:
      Cluster Domain:  cluster.local 1
      Cluster IP:      172.30.0.10 2
    ...

    1
    The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names.
    2
    The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range.
  2. To find the service CIDR range of your cluster, use the oc get command:

    $ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}'

    Example output

    [172.30.0.0/16]

8.3. Using DNS forwarding

You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:

  • Specify name servers (spec.servers) for every zone. If the forwarded zone is the ingress domain managed by OpenShift Container Platform, then the upstream name server must be authorized for the domain.