Network security
Securing network traffic and enforcing network policies in OpenShift Container Platform
Chapter 1. Understanding network policy APIs
Network policy is defined using both cluster-scoped and namespace-scoped network policy APIs. By defining network policy across these different levels, you can create sophisticated network security configurations for your clusters, including full multi-tenant isolation.
1.1. Network policies and their scope
- Cluster-scoped network policy

  Cluster and network administrators can use the AdminNetworkPolicy feature to define network policy at the cluster level. The feature consists of two APIs: the AdminNetworkPolicy API and the BaselineAdminNetworkPolicy API. These APIs are used to set rules that can be applied to the entire cluster, or delegated to the namespace-scoped NetworkPolicy.

  Policies defined using the AdminNetworkPolicy API take precedence over all other policy types when set to "Allow" or "Deny". However, administrators can also use "Pass" to delegate responsibility for a given policy to the namespace-scoped NetworkPolicy, allowing application developers and namespace tenants to control specific aspects of network security for their projects.

  Policies defined using the BaselineAdminNetworkPolicy API apply only when no other network policy overrides them. When you use the AdminNetworkPolicy API to delegate an aspect of network policy to the namespace-scoped NetworkPolicy, you should also define a sensible minimum restriction in the BaselineAdminNetworkPolicy. This ensures a baseline level of network security at the cluster level in case the NetworkPolicy for a namespace does not provide sufficient protection.

- Namespace-scoped network policy

  Application developers and namespace tenants can use the NetworkPolicy API to define network policy rules for a specific namespace. Rules in the NetworkPolicy for a namespace take precedence over cluster-wide rules configured using the BaselineAdminNetworkPolicy API, and over cluster-wide rules that have been delegated or "passed" from the cluster-wide AdminNetworkPolicy API.
1.2. How network policy is evaluated and applied
When a network connection is established, the network provider (default: OVN-Kubernetes) checks the connection details against network policy rules to determine how to handle the connection.
OVN-Kubernetes evaluates connections against network policy objects in the following order:
1. Check for matches in the AdminNetworkPolicy tier.
   - If a connection matches an Allow or Deny rule, follow that rule and stop evaluating.
   - If a connection matches a Pass rule, move to the NetworkPolicy tier.
2. Check for matches in the NetworkPolicy tier.
   - If a connection matches a rule, follow that rule and stop evaluating.
   - If no match is found, move to the BaselineAdminNetworkPolicy tier.
3. Follow a matching rule in the BaselineAdminNetworkPolicy tier.
Figure 1.1. Evaluation of network policies by OVN-Kubernetes
1.3. Key differences between AdminNetworkPolicy and NetworkPolicy custom resources
The following table explains key differences between the cluster-scoped AdminNetworkPolicy API and the namespace-scoped NetworkPolicy API.
Policy elements | AdminNetworkPolicy | NetworkPolicy
---|---|---
Applicable user | Cluster administrator or equivalent | Namespace owners
Scope | Cluster | Namespace
Drop traffic | Supported with an explicit Deny action set as a rule. | Supported via implicit Deny isolation at policy creation time.
Delegate traffic | Supported with a Pass action set as a rule. | Not applicable.
Allow traffic | Supported with an explicit Allow action set as a rule. | The default action for all rules is to allow.
Rule precedence within the policy | Depends on the order in which they appear within an ANP. The higher the rule's position, the higher the precedence. | Rules are additive.
Policy precedence | Among ANPs, the priority field sets the order for evaluation. The lower the priority number, the higher the precedence. | There is no policy ordering between policies.
Feature precedence | Evaluated first via tier 1 ACL; BANP is evaluated last via tier 3 ACL. | Enforced after ANP and before BANP; evaluated in tier 2 of the ACL.
Matching pod selection | Can apply different rules across namespaces. | Can apply different rules across pods in a single namespace.
Cluster egress traffic | Supported via nodes and networks peers. | Supported through the ipBlock field along with accepted CIDR syntax.
Cluster ingress traffic | Not supported. | Not supported.
Fully qualified domain names (FQDN) peer support | Not supported. | Not supported.
Namespace selectors | Supports advanced selection of namespaces with the use of the namespaces.matchLabels field. | Supports label-based namespace selection with the use of the namespaceSelector field.
Chapter 2. Admin network policy
2.1. OVN-Kubernetes AdminNetworkPolicy
2.1.1. AdminNetworkPolicy
An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As a Red Hat OpenShift Service on AWS administrator, you can use ANPs to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies at a cluster-scoped level that are non-overridable by NetworkPolicy objects.
The key difference between AdminNetworkPolicy and NetworkPolicy objects is that the former is for administrators and is cluster scoped, while the latter is for tenant owners and is namespace scoped.
An ANP allows administrators to specify the following:

- A priority value that determines the order of its evaluation. The lower the value, the higher the precedence.
- A set of pods, selected by a set of namespaces or a namespace, on which the policy is applied.
- A list of ingress rules to be applied for all ingress traffic towards the subject.
- A list of egress rules to be applied for all egress traffic from the subject.
2.1.1.1. AdminNetworkPolicy example
Example 2.1. Example YAML file for an ANP
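A manifest consistent with the numbered callouts might look like the following sketch; the policy name, the priority value, and the example.name and tenant-1 label values are illustrative:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: sample-anp-deny-pass-rules # 1, illustrative name
spec:
  priority: 50 # 2
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: example.name # 3
  ingress: # 4
  - name: "deny-all-ingress-tenant-1" # 5
    action: "Deny"
    from:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-anp: tenant-1
        podSelector: # 6
          matchLabels:
            custom-anp: tenant-1
  egress: # 7
  - name: "pass-all-egress-to-tenant-1"
    action: "Pass"
    to:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-anp: tenant-1
        podSelector:
          matchLabels:
            custom-anp: tenant-1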
1. Specify a name for your ANP.
2. The spec.priority field supports a maximum of 100 ANPs in the range of values 0-99 in a cluster. The lower the value, the higher the precedence, because the range is read in order from the lowest to highest value. Because there is no guarantee which policy takes precedence when ANPs are created at the same priority, set ANPs at different priorities so that precedence is deliberate.
3. Specify the namespace to apply the ANP resource to.
4. ANPs have both ingress and egress rules. ANP rules for the spec.ingress field accept values of Pass, Deny, and Allow for the action field.
5. Specify a name for the ingress.name field.
6. Specify podSelector.matchLabels to select pods within the namespaces selected by namespaceSelector.matchLabels as ingress peers.
7. ANPs have both ingress and egress rules. ANP rules for the spec.egress field accept values of Pass, Deny, and Allow for the action field.
2.1.1.2. AdminNetworkPolicy actions for rules
As an administrator, you can set Allow, Deny, or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule.
2.1.1.2.1. AdminNetworkPolicy Allow example
The following ANP, defined at priority 9, ensures all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster.
Example 2.2. Example YAML file for a strong Allow ANP
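A sketch of this policy; the empty subject selector applies the rule to all namespaces, and the policy name is illustrative:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-monitoring
spec:
  priority: 9
  subject:
    namespaces: {} # an empty selector selects all namespaces in the cluster
  ingress:
  - name: "allow-ingress-from-monitoring"
    action: "Allow"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring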
This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects, and the monitoring tenant also has no say in what it can or cannot monitor.
2.1.1.2.2. AdminNetworkPolicy Deny example
The following ANP, defined at priority 5, ensures all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have the label security: restricted).
Example 2.3. Example YAML file for a strong Deny ANP
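A sketch of this policy, following the namespace labels named in the prose:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: block-monitoring
spec:
  priority: 5
  subject:
    namespaces:
      matchLabels:
        security: restricted
  ingress:
  - name: "deny-ingress-from-monitoring"
    action: "Deny"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring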
This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure's monitoring service cannot scrape anything from these sensitive namespaces.

When combined with the strong Allow example, the block-monitoring ANP has a lower priority value, giving it higher precedence, which ensures restricted tenants are never monitored.
2.1.1.2.3. AdminNetworkPolicy Pass example
The following ANP, defined at priority 7, ensures all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have the label security: internal) is passed on to tier 2 of the ACLs and evaluated by the namespaces' NetworkPolicy objects.
Example 2.4. Example YAML file for a strong Pass ANP
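A sketch of this policy, following the labels named in the prose:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: pass-monitoring
spec:
  priority: 7
  subject:
    namespaces:
      matchLabels:
        security: internal
  ingress:
  - name: "pass-ingress-from-monitoring"
    action: "Pass"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring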
This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose if their metrics should be scraped by the infrastructure's monitoring service using namespace-scoped NetworkPolicy objects.
2.2. OVN-Kubernetes BaselineAdminNetworkPolicy
2.2.1. BaselineAdminNetworkPolicy
A BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As a Red Hat OpenShift Service on AWS administrator, you can use a BANP to set up and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for a BANP are Allow or Deny.

The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy in case a passed traffic policy does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that blocks intra-cluster traffic by default, requiring users to create NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource.
A BANP allows administrators to specify:

- A subject that consists of a set of namespaces or a namespace.
- A list of ingress rules to be applied for all ingress traffic towards the subject.
- A list of egress rules to be applied for all egress traffic from the subject.
2.2.1.1. BaselineAdminNetworkPolicy example
Example 2.5. Example YAML file for BANP
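A manifest consistent with the numbered callouts might look like the following sketch; the example.name and custom-banp label values are illustrative:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default # 1
spec:
  subject:
    namespaces: # 2
      matchLabels:
        kubernetes.io/metadata.name: example.name
  ingress: # 3
  - name: "deny-all-ingress-from-tenant-1" # 4
    action: "Deny"
    from:
    - pods:
        namespaceSelector: # 5
          matchLabels:
            custom-banp: tenant-1
        podSelector: # 6
          matchLabels:
            custom-banp: tenant-1
  egress:
  - name: "allow-all-egress-to-tenant-1"
    action: "Allow"
    to:
    - pods:
        namespaceSelector:
          matchLabels:
            custom-banp: tenant-1
        podSelector:
          matchLabels:
            custom-banp: tenant-1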
1. The policy name must be default because BANP is a singleton object.
2. Specify the namespace to apply the BANP to.
3. BANPs have both ingress and egress rules. BANP rules for the spec.ingress and spec.egress fields accept values of Deny and Allow for the action field.
4. Specify a name for the ingress.name field.
5. Specify the namespaces to select the pods from to apply the BANP resource.
6. Specify the podSelector.matchLabels of the pods to apply the BANP resource.
2.2.1.2. BaselineAdminNetworkPolicy Deny example
The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at the internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy.
Example 2.6. Example YAML file for a guardrail Deny rule
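A sketch of this guardrail policy, following the labels named in the prose:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default
spec:
  subject:
    namespaces:
      matchLabels:
        security: internal
  ingress:
  - name: "deny-ingress-from-monitoring"
    action: "Deny"
    from:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: monitoring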
You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant.
As an administrator, if you apply both the "AdminNetworkPolicy Pass action example" and the "BaselineAdminNetworkPolicy Deny example", tenants are then left with the ability to choose to create a NetworkPolicy resource that will be evaluated before the BANP.
For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic:
Example 2.7. Example NetworkPolicy
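A sketch of such a policy; the tenant-1 namespace name is illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: tenant-1 # illustrative tenant namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring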
In this scenario, Tenant 1's policy would be evaluated after the "AdminNetworkPolicy Pass action example" and before the "BaselineAdminNetworkPolicy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal. With Tenant 1's NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you have not monitored internal tenants by default; instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP.
Chapter 3. Network policy
3.1. About network policy
As a developer, you can define network policies that restrict traffic to pods in your cluster.
3.1.1. About network policy
By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.
A network policy applies to only the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), and Stream Control Transmission Protocol (SCTP) protocols. Other protocols are not affected.
- A network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules.
- Using the namespaceSelector field without the podSelector field set to {} will not include hostNetwork pods. You must use the podSelector set to {} with the namespaceSelector field in order to target hostNetwork pods when creating network policies.
- Network policies cannot block traffic from localhost or from their resident nodes.
The following example NetworkPolicy objects demonstrate supporting different scenarios:
Deny all traffic:

To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:
Only allow connections from the Red Hat OpenShift Service on AWS Ingress Controller:

To make a project allow only connections from the Red Hat OpenShift Service on AWS Ingress Controller, add the following NetworkPolicy object:
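A sketch of such a policy, using the OVN-Kubernetes ingress namespace selector label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress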
Only accept connections from pods within a project:

Important: To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy.

To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:
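A sketch of such a policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {} # any pod in the same namespace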
Only allow HTTP and HTTPS traffic based on pod labels:

To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:
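A sketch of such a policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443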
Accept connections by using both namespace and pod selectors:

To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:
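A sketch of such a policy; the test-pods and project labels are placeholders:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-pod-and-namespace-both
spec:
  podSelector:
    matchLabels:
      name: test-pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: project_name
      podSelector: # both selectors in one peer must match
        matchLabels:
          name: test-pods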
NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.

For example, for the NetworkPolicy objects defined in previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by each policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
3.1.1.1. Using the allow-from-router network policy
Use the following NetworkPolicy to allow external traffic regardless of the router configuration:
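A sketch of this policy (the metadata name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-router
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: "" # 1
  podSelector: {}
  policyTypes:
  - Ingress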
1. The policy-group.network.openshift.io/ingress: "" label supports OVN-Kubernetes.
3.1.1.2. Using the allow-from-hostnetwork network policy
Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods.
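A sketch of this policy; the policy-group.network.openshift.io/host-network: "" namespace selector label follows the OVN-Kubernetes host-network policy group:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-hostnetwork
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ""
  podSelector: {}
  policyTypes:
  - Ingress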
3.1.2. Optimizations for network policy with OVN-Kubernetes network plugin
When designing your network policy, refer to the following guidelines:
- For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules.
- Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule. Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod.

For example, the following policy contains two rules:
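A sketch of such a two-rule policy (the role labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  - from:
    - podSelector:
        matchLabels:
          role: backend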
The following policy expresses those same two rules as one:
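A sketch of the equivalent single-rule policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchExpressions:
        - {key: role, operator: In, values: [frontend, backend]}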
The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules:
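A sketch of two such policies (the policy names and role labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
spec:
  podSelector:
    matchLabels:
      role: client
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend

The following network policy expresses those same two rules as one:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy3
spec:
  podSelector:
    matchExpressions:
    - {key: role, operator: In, values: [db, client]}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend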
You can apply this optimization only when multiple selectors can be expressed as one. In cases where selectors are based on different labels, it might not be possible to apply this optimization. In those cases, consider applying new labels specifically for network policy optimization.
3.1.2.1. NetworkPolicy CR and external IPs in OVN-Kubernetes
In OVN-Kubernetes, the NetworkPolicy custom resource (CR) enforces strict isolation rules. If a service is exposed using an external IP, a network policy can block access from other namespaces unless explicitly configured to allow traffic.

To allow access to external IPs across namespaces, create a NetworkPolicy CR that explicitly permits ingress from the required namespaces and ensures traffic is allowed to the designated service ports. Without allowing traffic to the required ports, access might still be restricted.
Example NetworkPolicy
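A sketch of such a policy, with placeholder values for the allowed namespace and service port:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <policy_name>
  namespace: <my_namespace>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: <allowed_namespace> # assumption: allow one named namespace
    ports:
    - protocol: TCP
      port: <service_port> # allow traffic to the designated service port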
where:

<policy_name>: Specifies your name for the policy.
<my_namespace>: Specifies the name of the namespace where the policy is deployed.
For more details, see "About network policy".
3.2. Creating a network policy
As a cluster administrator, you can create a network policy for a namespace.
3.2.1. Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
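A sketch consistent with the callouts below; the name allow-27107, the app labels, and port 27017 are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-27107 # 1
spec:
  podSelector: # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: # 3
        matchLabels:
          app: app
    ports: # 4
    - protocol: TCP
      port: 27017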
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4. A list of one or more destination ports on which to accept traffic.
3.2.2. Creating a network policy using the CLI
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
1. Create a policy rule:

   a. Create a <policy_name>.yaml file:

   $ touch <policy_name>.yaml

   where:

   <policy_name>: Specifies the network policy file name.

   b. Define a network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces

This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other network policies.
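A sketch of such a policy (the default namespace is a placeholder):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []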
Allow ingress from all pods in the same namespace
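A sketch of such a policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}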
Allow ingress traffic to one pod from a particular namespace
This policy allows traffic to pods that have the pod-a label from pods running in namespace-y.
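A sketch of such a policy (the metadata name is illustrative):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-traffic-pod
  namespace: default
spec:
  podSelector:
    matchLabels:
      pod: pod-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: namespace-y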
2. To create the network policy object, enter the following command:

   $ oc apply -f <policy_name>.yaml -n <namespace>

   where:

   <policy_name>: Specifies the network policy file name.
   <namespace>: Optional: If you defined the object in a different namespace than the current namespace, specifies the namespace.

   Successful output lists the name of the policy object and the created status.
If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
3.2.3. Creating a default deny all network policy
This policy blocks all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies and traffic between host-networked pods. This procedure enforces a strong deny policy by applying a deny-by-default policy in the my-project namespace.
Without configuring a NetworkPolicy custom resource (CR) that allows traffic communication, the following policy might cause communication problems across your cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
1. Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:
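A sketch of this file, with the callout numbers marked as comments:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: my-project # 1
spec:
  podSelector: {} # 2
  ingress: [] # 3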
1. Specifies the namespace in which to deploy the policy. For example, the my-project namespace.
2. If this field is empty, the configuration matches all the pods. Therefore, the policy applies to all pods in the my-project namespace.
3. There are no ingress rules specified. This causes incoming traffic to be dropped to all pods.
2. Apply the policy by entering the following command:

   $ oc apply -f deny-by-default.yaml

   Successful output lists the name of the policy object and the created status.
3.2.4. Creating a network policy to allow traffic from external clients
With the deny-by-default policy in place, you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows external services to access the pod from the public Internet directly or by using a load balancer. Traffic is only allowed to a pod with the label app=web.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:
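A sketch of this file; the empty ingress rule element allows traffic from all sources:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
  namespace: default
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}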
2. Apply the policy by entering the following command:

   $ oc apply -f web-allow-external.yaml

   Successful output lists the name of the policy object and the created status. This policy allows traffic from all resources, including external traffic, as illustrated in the following diagram:
3.2.5. Creating a network policy allowing traffic to an application from all namespaces
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:
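A sketch of this file; the empty namespaceSelector matches all namespaces:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all-namespaces
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}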
Note: By default, if you do not specify a namespaceSelector parameter in the policy object, no namespaces get selected. This means the policy allows traffic only from the namespace where the network policy is deployed.

2. Apply the policy by entering the following command:
   $ oc apply -f web-allow-all-namespaces.yaml

   Successful output lists the name of the policy object and the created status.
Verification
1. Start a web service in the default namespace by entering the following command:

   $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to deploy an
alpine
image in thesecondary
namespace and to start a shell:oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
$ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command in the shell and observe that the service allows the request:
wget -qO- --timeout=2 http://web.default
# wget -qO- --timeout=2 http://web.default
Expected output
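The request succeeds and returns the nginx welcome page, similar to the following (truncated):

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>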
3.2.6. Creating a network policy allowing traffic to an application from a namespace
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:
- Restrict traffic to a production database only to namespaces that have production workloads deployed.
- Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace that the network policy applies to.
Procedure
1. Create a policy that allows traffic from all pods in namespaces with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:
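A sketch of this file:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production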
2. Apply the policy by entering the following command:

   $ oc apply -f web-allow-prod.yaml

   Successful output lists the name of the policy object and the created status.
Verification
1. Start a web service in the default namespace by entering the following command:

   $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to create the
prod
namespace:oc create namespace prod
$ oc create namespace prod
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to label the
prod
namespace:oc label namespace/prod purpose=production
$ oc label namespace/prod purpose=production
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to create the
dev
namespace:oc create namespace dev
$ oc create namespace dev
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to label the
dev
namespace:oc label namespace/dev purpose=testing
$ oc label namespace/dev purpose=testing
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to deploy an
alpine
image in thedev
namespace and to start a shell:oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
$ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command in the shell and observe the reason for the blocked request. For example, expected output states
wget: download timed out
.wget -qO- --timeout=2 http://web.default
# wget -qO- --timeout=2 http://web.default
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to deploy an
alpine
image in theprod
namespace and start a shell:oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
$ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command in the shell and observe that the request is allowed:
wget -qO- --timeout=2 http://web.default
# wget -qO- --timeout=2 http://web.default
Expected output
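The request succeeds and returns the nginx welcome page, similar to the following (truncated):

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>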
3.2.7. Creating a network policy using OpenShift Cluster Manager
To define granular rules describing the ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created a Red Hat OpenShift Service on AWS cluster.
- You configured an identity provider for your cluster.
- You added your user account to the configured identity provider.
- You created a project within your Red Hat OpenShift Service on AWS cluster.
Procedure
- From OpenShift Cluster Manager, click on the cluster you want to access.
- Click Open console to navigate to the OpenShift web console.
- Click on your identity provider and provide your credentials to log in to the cluster.
- From the administrator perspective, under Networking, click NetworkPolicies.
- Click Create NetworkPolicy.
- Provide a name for the policy in the Policy name field.
- Optional: You can provide the label and selector for a specific pod if this policy applies only to one or more specific pods. If you do not select a specific pod, then this policy will be applicable to all pods in the project.
- Optional: You can block all ingress and egress traffic by using the Deny all ingress traffic or Deny all egress traffic checkboxes.
- You can also add any combination of ingress and egress rules, allowing you to specify the port, namespace, or IP blocks you want to approve.
Add ingress rules to your policy:
Select Add ingress rule to configure a new rule. This action creates a new Ingress rule row with an Add allowed source drop-down menu that enables you to specify how you want to limit inbound traffic. The drop-down menu offers three options to limit your ingress traffic:
- Allow pods from the same namespace limits traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace.
- Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow inbound traffic. Leaving this option blank allows inbound traffic from all namespaces and pods within this cluster.
- Allow peers by IP block limits traffic from a specified Classless Inter-Domain Routing (CIDR) IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all inbound traffic from all external sources.
- You can restrict all of your inbound traffic to a port. If you do not add any ports then all ports are accessible to traffic.
Add egress rules to your network policy:
Select Add egress rule to configure a new rule. This action creates a new Egress rule row with an Add allowed destination drop-down menu that enables you to specify how you want to limit outbound traffic. The drop-down menu offers three options to limit your egress traffic:
- Allow pods from the same namespace limits outbound traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace.
- Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow outbound traffic. Leaving this option blank allows outbound traffic from all namespaces and pods within this cluster.
- Allow peers by IP block limits traffic to a specified CIDR IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all outbound traffic to all external destinations.
- You can restrict all of your outbound traffic to a port. If you do not add any ports then all ports are accessible to traffic.
3.3. Viewing a network policy
As a cluster administrator, you can view a network policy for a namespace.
3.3.1. Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
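A sketch consistent with the callouts below; the name allow-27107, the app labels, and port 27017 are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-27107 # 1
spec:
  podSelector: # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: # 3
        matchLabels:
          app: app
    ports: # 4
    - protocol: TCP
      port: 27017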
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4. A list of one or more destination ports on which to accept traffic.
3.3.2. Viewing network policies using the CLI
You can examine the network policies in a namespace.
If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace where the network policy exists.
Procedure
1. To view network policy objects defined in a namespace, enter the following command:

   $ oc get networkpolicy
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: To examine a specific network policy, enter the following command:
oc describe networkpolicy <policy_name> -n <namespace>
$ oc describe networkpolicy <policy_name> -n <namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
<policy_name>
- Specifies the name of the network policy to inspect.
<namespace>
- Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
   For example:

   $ oc describe networkpolicy allow-same-namespace
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Output for
oc describe
commandCopy to Clipboard Copied! Toggle word wrap Toggle overflow
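The output is similar to the following (the namespace and creation timestamp will differ):

Name:         allow-same-namespace
Namespace:    ns1
Created on:   2021-05-24 22:28:56 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      PodSelector: <none>
  Not affecting egress traffic
  Policy Types: Ingress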
If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
3.3.3. Viewing network policies using OpenShift Cluster Manager
You can view the configuration details of your network policy in Red Hat OpenShift Cluster Manager.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created a Red Hat OpenShift Service on AWS cluster.
- You configured an identity provider for your cluster.
- You added your user account to the configured identity provider.
- You created a network policy.
Procedure
- From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking, click NetworkPolicies.
- Select the desired network policy to view.
- In the Network Policy details page, you can view all of the associated ingress and egress rules.
Select YAML on the network policy details to view the policy configuration in YAML format.
Note: You can only view the details of these policies. You cannot edit these policies.
3.4. Editing a network policy
As a cluster administrator, you can edit an existing network policy for a namespace.
3.4.1. Editing a network policy
You can edit a network policy in a namespace.
If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
- You are working in the namespace where the network policy exists.
Procedure
1. Optional: To list the network policy objects in a namespace, enter the following command:

   $ oc get networkpolicy -n <namespace>

   where:

   <namespace>: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
2. Edit the network policy object.

   - If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command:

     $ oc apply -n <namespace> -f <policy_file>.yaml

     where:

     <namespace>: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
     <policy_file>: Specifies the name of the file containing the network policy.
   - If you need to update the network policy object directly, enter the following command:

     $ oc edit networkpolicy <policy_name> -n <namespace>

     where:

     <policy_name>: Specifies the name of the network policy.
     <namespace>: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
3. Confirm that the network policy object is updated:

   $ oc describe networkpolicy <policy_name> -n <namespace>

   where:

   <policy_name>: Specifies the name of the network policy.
   <namespace>: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.
3.4.2. Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
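A sketch consistent with the callouts below; the name allow-27107, the app labels, and port 27017 are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-27107 # 1
spec:
  podSelector: # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: # 3
        matchLabels:
          app: app
    ports: # 4
    - protocol: TCP
      port: 27017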
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4. A list of one or more destination ports on which to accept traffic.
3.5. Deleting a network policy
As a cluster administrator, you can delete a network policy from a namespace.
3.5.1. Deleting a network policy using the CLI
You can delete a network policy in a namespace.
If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You logged in to the cluster with a user with admin privileges.
- You are working in the namespace where the network policy exists.
Procedure
To delete a network policy object, enter the following command:

$ oc delete networkpolicy <policy_name> -n <namespace>

where:

<policy_name>: Specifies the name of the network policy.
<namespace>: Optional: If you defined the object in a different namespace than the current namespace, specifies the namespace.

Successful output lists the name of the policy object and the deleted status.
If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.
3.5.2. Deleting a network policy using OpenShift Cluster Manager
You can delete a network policy in a namespace.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created a Red Hat OpenShift Service on AWS cluster.
- You configured an identity provider for your cluster.
- You added your user account to the configured identity provider.
Procedure
- From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking, click NetworkPolicies.
Use one of the following methods for deleting your network policy:
Delete the policy from the Network Policies table:
- From the Network Policies table, select the stack menu on the row of the network policy you want to delete and then, click Delete NetworkPolicy.
Delete the policy using the Actions drop-down menu from the individual network policy details:
- Click the Actions drop-down menu for your network policy.
- Select Delete NetworkPolicy from the menu.
3.6. Defining a default network policy for projects
As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one.
3.6.1. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with dedicated-admin permissions.
Procedure
1. Log in as a user with cluster-admin privileges.
2. Generate the default project template:
   $ oc adm create-bootstrap-project-template -o yaml > template.yaml
3. Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
4. The project template must be created in the openshift-config namespace. Load your modified template:
   $ oc create -f template.yaml -n openshift-config
Using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the
project.config.openshift.io/cluster
resource:oc edit project.config.openshift.io/cluster
$ oc edit project.config.openshift.io/cluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
6. Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.

Project configuration resource with custom project template
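A sketch of this resource:

apiVersion: config.openshift.io/v1
kind: Project
metadata:
# ...
spec:
  projectRequestTemplate:
    name: project-request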
7. After you save your changes, create a new project to verify that your changes were successfully applied.
3.6.2. Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. Red Hat OpenShift Service on AWS will automatically create all the NetworkPolicy objects specified in the template in the project.
Prerequisites
- Your cluster uses a default container network interface (CNI) network plugin that supports NetworkPolicy objects, such as OVN-Kubernetes.
- You installed the OpenShift CLI (oc).
- You must log in to the cluster with a user with cluster-admin privileges.
- You must have created a custom default project template for new projects.
Procedure
1. Edit the default template for a new project by running the following command:

   $ oc edit template <project_template> -n openshift-config
   Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.

2. In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.

   In the following example, the objects parameter collection includes several NetworkPolicy objects.
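A sketch of the template objects, matching the policies shown in the expected output below; other template fields are omitted:

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: project-request
objects:
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector: {}
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            policy-group.network.openshift.io/ingress: ""
    podSelector: {}
    policyTypes:
    - Ingress
# ... other template objects and parameters ...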
3. Optional: Create a new project and confirm the successful creation of your network policy objects.

   a. Create a new project:
      $ oc new-project <project>

      Replace <project> with the name for the project you are creating.
   b. Confirm that the network policy objects in the new project template exist in the new project:

      $ oc get networkpolicy

      Expected output:
NAME                           POD-SELECTOR   AGE
allow-from-openshift-ingress   <none>         7s
allow-from-same-namespace      <none>         7s
3.7. Configuring multitenant isolation with network policy
As a cluster administrator, you can configure your network policies to provide multitenant network isolation.
Configuring network policies as described in this section provides network isolation similar to the multitenant mode of OpenShift SDN in previous versions of Red Hat OpenShift Service on AWS.
3.7.1. Configuring multitenant isolation by using network policy
You can configure your project to isolate it from pods and services in other project namespaces.
Prerequisites
- Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set.
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with admin privileges.
Procedure
1. Create the following NetworkPolicy objects:

   - A policy named allow-from-openshift-ingress:
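A sketch of this policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
  podSelector: {}
  policyTypes:
  - Ingress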
Note: policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OVN-Kubernetes.

   - A policy named allow-from-openshift-monitoring:
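A sketch of this policy; the network.openshift.io/policy-group: monitoring namespace label follows the monitoring policy group:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress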
   - A policy named allow-same-namespace:
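A sketch of this policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}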
   - A policy named allow-from-kube-apiserver-operator:
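A sketch of this policy, assuming the operator's namespace openshift-kube-apiserver-operator and app label kube-apiserver-operator:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-kube-apiserver-operator
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-kube-apiserver-operator
      podSelector:
        matchLabels:
          app: kube-apiserver-operator
  policyTypes:
  - Ingress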
     For more details, see New kube-apiserver-operator webhook controller validating health of webhook.
2. Optional: To confirm that the network policies exist in your current project, enter the following command:

   $ oc describe networkpolicy

   Example output
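The output is similar to the following, with one entry per policy (namespace and timestamps will differ):

Name:         allow-from-openshift-ingress
Namespace:    example1
Created on:   2024-06-09 00:28:17 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      NamespaceSelector: policy-group.network.openshift.io/ingress:
  Not affecting egress traffic
  Policy Types: Ingress
...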
Chapter 4. Network verification for Red Hat OpenShift Service on AWS clusters
Network verification checks run automatically when you deploy a Red Hat OpenShift Service on AWS cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues before cluster deployment.
You can also run the network verification checks manually to validate the configuration for an existing cluster.
4.1. Understanding network verification for Red Hat OpenShift Service on AWS clusters
When you deploy a Red Hat OpenShift Service on AWS cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster, network verification runs automatically. This helps you identify and resolve configuration issues before cluster deployment.
When you prepare to install your cluster by using Red Hat OpenShift Cluster Manager, the automatic checks run after you input a subnet into a subnet ID field on the Virtual Private Cloud (VPC) subnet settings page. If you create your cluster by using the ROSA CLI (rosa) with the interactive mode, the checks run after you provide the required VPC network information. If you use the CLI without the interactive mode, the checks begin immediately before cluster creation.
When you add a machine pool with a subnet that is new to your cluster, the automatic network verification checks the subnet to ensure that network connectivity is available before the machine pool is provisioned.
After automatic network verification completes, a record is sent to the service log. The record provides the results of the verification check, including any network configuration errors. Resolving the identified issues before a deployment gives the deployment a greater chance of success.
You can also run the network verification manually for an existing cluster. This enables you to verify the network configuration for your cluster after making configuration changes. For steps to run the network verification checks manually, see Running the network verification manually.
4.2. Scope of the network verification checks
The network verification includes checks for each of the following requirements:
- The parent Virtual Private Cloud (VPC) exists.
- All specified subnets belong to the VPC.
- The VPC has enableDnsSupport enabled.
- The VPC has enableDnsHostnames enabled.
4.3. Automatic network verification bypassing
You can bypass the automatic network verification if you want to deploy a cluster with known network configuration issues into an existing Virtual Private Cloud (VPC).
If you bypass the network verification when you create a cluster, the cluster has a limited support status. After installation, you can resolve the issues and then manually run the network verification. The limited support status is removed after the verification succeeds.
When you install a cluster into an existing VPC by using Red Hat OpenShift Cluster Manager, you can bypass the automatic verification by selecting Bypass network verification on the Virtual Private Cloud (VPC) subnet settings page.
4.3.1. Running the network verification manually using the CLI
You can manually run the network verification checks for an existing Red Hat OpenShift Service on AWS cluster by using the ROSA CLI (rosa).
When you run the network verification, you can specify a set of VPC subnet IDs or a cluster name.
Prerequisites
- You have installed and configured the latest ROSA CLI (rosa) on your installation host.
) on your installation host. - You have an existing Red Hat OpenShift Service on AWS cluster.
- You are the cluster owner or you have the cluster editor role.
Procedure
Verify the network configuration by using one of the following methods:
Verify the network configuration by specifying the cluster name. The subnet IDs are automatically detected:
$ rosa verify network --cluster <cluster_name> 1

1. Replace <cluster_name> with the name of your cluster.
Example output
I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc]
I: subnet-03146b9b52b6024cb: pending
I: subnet-03146b9b52b2034cc: passed
I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc

Ensure that verification of all subnets has completed:
$ rosa verify network --watch \ 1
    --status-only \ 2
    --region <region_name> \ 3
    --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc 4

1. The --watch flag causes the command to complete after all the subnets under test are in a failed or passed state.
2. The --status-only flag does not trigger a run of network verification but returns the current state, for example, subnet-123 (verification still in-progress). By default, without this option, a call to this command always triggers a verification of the specified subnets.
3. Use a specific AWS region that overrides the AWS_REGION environment variable.
4. Enter a comma-separated list of subnet IDs to verify. If any of the subnets do not exist, the error message Network verification for subnet 'subnet-<subnet_number>' not found displays and no subnets are checked.
Example output
I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc]
I: subnet-03146b9b52b6024cb: passed
I: subnet-03146b9b52b2034cc: passed

Tip: To output the full list of verification tests, you can include the --debug argument when you run the rosa verify network command.
Verify the network configuration by specifying the VPC subnet IDs. Replace <region_name> with your AWS region and <AWS_account_ID> with your AWS account ID:

$ rosa verify network --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc --region <region_name> --role-arn arn:aws:iam::<AWS_account_ID>:role/my-Installer-Role
Example output
I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc]
I: subnet-03146b9b52b6024cb: pending
I: subnet-03146b9b52b2034cc: passed
I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc

Ensure that verification of all subnets has completed:
$ rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc

Example output
I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc]
I: subnet-03146b9b52b6024cb: passed
I: subnet-03146b9b52b2034cc: passed
Chapter 5. Audit logging for network security
The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) access control lists (ACLs) to manage AdminNetworkPolicy, BaselineAdminNetworkPolicy, NetworkPolicy, and EgressFirewall objects. Audit logging exposes allow and deny ACL events for NetworkPolicy, EgressFirewall, and BaselineAdminNetworkPolicy custom resources (CRs). Logging also exposes allow, deny, and pass ACL events for the AdminNetworkPolicy (ANP) CR.
Audit logging is available only for the OVN-Kubernetes network plugin.
5.1. Audit configuration
The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging:
Audit logging configuration
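A minimal sketch of this configuration in the Network custom resource follows; the values shown reflect the documented defaults, and the exact defaults for your cluster version may differ:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"        # no additional log target
        maxFileSize: 50            # maximum audit log size
        maxLogFiles: 5             # maximum number of retained log files
        rateLimit: 20              # messages per second per node
        syslogFacility: local0     # default syslog facility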
The following table describes the configuration fields for audit logging.
Field | Type | Description
---|---|---
rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second.
maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000 (50 MB).
maxLogFiles | integer | The maximum number of log files that are retained.
destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file specified by <file>), or "null" (do not send the audit logs to an additional target).
syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
5.2. Audit logging
You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster.
You can enable audit logging for each namespace by annotating the namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow, deny, or both values to enable audit logging for a namespace.
A network policy does not support setting the Pass action as a rule.
The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues.
Example namespace annotation
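A sketch of such an annotation follows; the namespace name example1 and the info severity levels are illustrative assumptions:

kind: Namespace
apiVersion: v1
metadata:
  name: example1                   # hypothetical namespace name
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "info",
        "allow": "info"
      }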
To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file.
The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0. The following example shows key parameters and their values as they appear in a log message:
Example logging message that outputs parameters and their values
<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow>
Where:
- <timestamp> states the time and date for the creation of a log message.
- <message_serial> lists the serial number for a log message.
- acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin.
- <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks, then two severity levels show in the log message output.
- <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database (nbdb) that was created by the network policy.
- <verdict> can be either allow or drop.
- <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod.
- <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields.
The following example shows the OVS fields that the flow parameter uses to extract packet information from system memory:

Example of OVS fields used by the flow parameter to extract packet information
<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>
Where:
- <proto> states the protocol. Valid values are tcp and udp.
- vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic.
- <src_mac> specifies the source Media Access Control (MAC) address.
- <dst_mac> specifies the destination MAC address.
- <source_ip> lists the source IP address.
- <target_ip> lists the target IP address.
- <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic.
- <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network.
- <ip_ttl> states the Time To Live (TTL) information for a packet.
- <fragment> specifies what type of IP fragments or IP non-fragments to match.
- <tcp_src_port> shows the source port for TCP and UDP protocols.
- <tcp_dst_port> lists the destination port for TCP and UDP protocols.
- <tcp_flags> supports numerous flags such as SYN, ACK, and PSH. If you need to set multiple values, each value is separated by a vertical bar (|). The UDP protocol does not support this parameter.
For more information about the previous field descriptions, see the OVS manual page for ovs-fields.
Example ACL deny log entry for a network policy
2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
The following table describes namespace annotation values:
Field | Description
---|---
deny | Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert, warning, notice, info, or debug values.
allow | Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert, warning, notice, info, or debug values.
pass | A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy or the baseline admin network policy rule in the namespace to evaluate the traffic.
5.3. AdminNetworkPolicy audit logging
Audit logging is enabled per AdminNetworkPolicy CR by annotating an ANP policy with the k8s.ovn.org/acl-logging key, as in the following example:
Example 5.1. Example of annotation for AdminNetworkPolicy CR
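A minimal sketch of such an annotated ANP follows. The name anp-tenant-log and the severity levels match the log entries shown later in this section; the priority and subject values are illustrative assumptions:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: anp-tenant-log
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert", "pass": "warning" }'
spec:
  priority: 5                      # assumed value for illustration
  subject:
    namespaces:
      matchLabels:
        tenant: backend-storage    # assumed subject, matching the example description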
Logs are generated whenever a specific OVN ACL is hit and meets the action criteria set in your logging annotation. For example, a log is generated when any of the namespaces with the label tenant: product-development accesses the namespaces with the label tenant: backend-storage.
ACL logging is limited to 60 characters. If your ANP name field is long, the rest of the log will be truncated.
The following is a direction index for the example log entries that follow:

Direction | Rule
---|---
Ingress | Rule 0 (Allow), Rule 1 (Pass), Rule 2 (Deny)
Egress | Rule 0 (Allow), Rule 1 (Pass), Rule 2 (Deny)
Example 5.2. Example ACL log entry for Allow action of the AdminNetworkPolicy named anp-tenant-log with Ingress:0 and Egress:0
2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn
2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack
2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:0", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack
Example 5.3. Example ACL log entry for Pass action of the AdminNetworkPolicy named anp-tenant-log with Ingress:1 and Egress:1
2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:1", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack
2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:1", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack
Example 5.4. Example ACL log entry for Deny action of the AdminNetworkPolicy named anp-tenant-log with Egress:2 and Ingress:2
2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:2", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn
2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:2", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn
The following table describes the ANP annotation:

Annotation | Value
---|---
k8s.ovn.org/acl-logging | You must specify at least one of Allow, Deny, or Pass together with this annotation for it to be enabled. Each action supports the values alert, warning, notice, info, or debug.
5.4. BaselineAdminNetworkPolicy audit logging
Audit logging is enabled in the BaselineAdminNetworkPolicy CR by annotating a BANP policy with the k8s.ovn.org/acl-logging key, as in the following example:
Example 5.5. Example of annotation for BaselineAdminNetworkPolicy CR
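A minimal sketch of such an annotated BANP follows. A BANP is a cluster singleton and must be named default; the severity levels and subject are illustrative assumptions (a BANP does not support the pass action):

apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default                    # a BANP must be named "default"
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }'
spec:
  subject:
    namespaces:
      matchLabels:
        tenant: workloads          # assumed subject, matching the example description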
In the example, a log is generated when any of the namespaces with the label tenant: dns accesses the namespaces with the label tenant: workloads.
The following is a direction index for the example log entries that follow:

Direction | Rule
---|---
Ingress | Rule 0 (Allow), Rule 1 (Allow)
Egress | Rule 0 (Allow)
Example 5.6. Example ACL allow log entry for Allow action of default BANP with Ingress:0
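An illustrative entry in the documented format follows; the timestamp, serial number, addresses, and ports are placeholders:

<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=syn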
Example 5.7. Example ACL allow log entry for Allow action of default BANP with Egress:0 and Ingress:1
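Illustrative entries in the documented format follow; the timestamps, serial numbers, addresses, and ports are placeholders:

<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=ack
<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=ack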
The following table describes the BANP annotation:

Annotation | Value
---|---
k8s.ovn.org/acl-logging | You must specify at least one of Allow or Deny together with this annotation for it to be enabled. A BANP does not support the Pass action.
5.5. Configuring egress firewall and network policy auditing for a cluster
As a cluster administrator, you can customize audit logging for your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To customize the audit logging configuration, enter the following command:
$ oc edit network.operator.openshift.io/cluster

Tip: You can also customize and apply the following YAML to configure audit logging:
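A sketch of such a customization follows; the destination value is an illustrative assumption, and the remaining values mirror the defaults described earlier:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "udp:<host>:<port>"   # for example, a syslog server; assumed customization
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0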
Verification
To create a namespace with network policies, complete the following steps:
Create a namespace for verification:
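A sketch of the command follows; the namespace name verify-audit-logging matches the later steps, and the alert severity levels match the example log output at the end of this procedure:

$ cat <<EOF | oc create -f -
kind: Namespace
apiVersion: v1
metadata:
  name: verify-audit-logging
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }'
EOF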
Successful output lists the namespace with the created status.

Create network policies for the namespace:
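A sketch of the policies follows; the policy names match the example output below, while the selector details are illustrative assumptions (a deny-all default plus allow rules scoped to the same namespace, consistent with the traffic results later in this procedure):

$ cat <<EOF | oc create -n verify-audit-logging -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}                  # selects all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}              # allow ingress from pods in the same namespace
  egress:
  - to:
    - podSelector: {}              # allow egress to pods in the same namespace
EOF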
Example output
networkpolicy.networking.k8s.io/deny-all created
networkpolicy.networking.k8s.io/allow-from-same-namespace created
Create a pod for source traffic in the default namespace:
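A sketch of the pod definition follows; the pod name client matches the later steps, while the image and command are illustrative assumptions:

$ cat <<EOF | oc create -n default -f -
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: client
    image: registry.access.redhat.com/ubi9/ubi   # assumed base image
    command: ["/bin/sh", "-c", "sleep inf"]      # keep the pod running
EOF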
Create two pods in the verify-audit-logging namespace:
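A sketch of the pod creation follows; the pod names client and server match the later steps, while the image and command are illustrative assumptions:

$ for name in client server ; do
  cat <<EOF | oc create -n verify-audit-logging -f -
apiVersion: v1
kind: Pod
metadata:
  name: ${name}
spec:
  containers:
  - name: ${name}
    image: registry.access.redhat.com/ubi9/ubi   # assumed base image
    command: ["/bin/sh", "-c", "sleep inf"]      # keep the pod running
EOF
done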
Successful output lists the two pods, such as pod/client and pod/server, and the created status.

To generate traffic and produce network policy audit log entries, complete the following steps:
Obtain the IP address for the pod named server in the verify-audit-logging namespace:

$ POD_IP=$(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')
Ping the IP address from the earlier command from the pod named client in the default namespace and confirm that all packets are dropped:
$ oc exec -it client -n default -- /bin/ping -c 2 $POD_IP
Example output
PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data.

--- 10.128.2.55 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 2041ms

From the client pod in the verify-audit-logging namespace, ping the IP address stored in the POD_IP shell environment variable and confirm that the system allows all packets:
$ oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 $POD_IP
Example output
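Illustrative output follows; the destination address matches the earlier ping output, and the timings are placeholders:

PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data.
64 bytes from 10.128.2.55: icmp_seq=1 ttl=64 time=2.12 ms
64 bytes from 10.128.2.55: icmp_seq=2 ttl=64 time=0.538 ms

--- 10.128.2.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms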
Display the latest entries in the network policy audit log:
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
  done

Example output
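Illustrative entries in the documented format follow; timestamps, serial numbers, and addresses are placeholders, showing the dropped ICMP traffic from the default namespace and the allowed ICMP traffic within the verify-audit-logging namespace:

<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<dst_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0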
5.6. Enabling egress firewall and network policy audit logging for a namespace
As a cluster administrator, you can enable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To enable audit logging for a namespace, enter the following command:
$ oc annotate namespace <namespace> \
    k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }'

where:
<namespace>: Specifies the name of the namespace.
Tip: You can also apply the following YAML to enable audit logging:
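A sketch of the equivalent YAML follows; the severity values mirror the command above:

kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "alert",
        "allow": "notice"
      }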
Successful output lists the namespace and the annotated status.
Verification
Display the latest entries in the audit log:
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
  done

Example output
2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
5.7. Disabling egress firewall and network policy audit logging for a namespace
As a cluster administrator, you can disable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To disable audit logging for a namespace, enter the following command:
$ oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-

where:
<namespace>: Specifies the name of the namespace.
Tip: You can also apply the following YAML to disable audit logging:
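A sketch of the equivalent YAML follows; setting the annotation to null is one way to clear it, assuming the manifest is applied with oc apply:

kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: null   # clears the annotation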
Successful output lists the namespace and the annotated status.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.