Chapter 23. OVN-Kubernetes network plugin
23.1. About the OVN-Kubernetes network plugin
The OpenShift Container Platform cluster uses a virtualized network for pod and service networks.
Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Container Platform. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
OVN-Kubernetes is the default networking solution for OpenShift Container Platform and single-node OpenShift deployments.
OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website.
OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. It provides a means for remotely controlling the flow of network traffic on a network device, so that network administrators can configure, manage, and monitor the flow of network traffic.
OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, DHCP, and DNS. OVN implements distributed virtual routing within logic flows, which equate to OpenFlow flows. For example, if a pod sends a DHCP request on the network, a logic flow rule matches that broadcast packet and responds with a gateway, a DNS server, an IP address, and so on.
OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPsec encryption, IPv6, network policy, network policy logs, hardware offloading, and multicast.
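For example, you can list these daemon sets and the pods that they create by running the following commands; this assumes that you are logged in as a user with the cluster-admin role, and the exact daemon set and pod names vary by release:
$ oc get daemonset -n openshift-ovn-kubernetes
$ oc get pods -n openshift-ovn-kubernetes -o wide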
23.1.1. OVN-Kubernetes purpose
The OVN-Kubernetes network plugin is an open-source, fully featured Kubernetes CNI plugin. The OVN-Kubernetes network plugin:
- Uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
- Implements Kubernetes network policy support, including ingress and egress rules.
- Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.
The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN.
- Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms
- Support for hybrid clusters with both Linux and Microsoft Windows workloads
- Optional IPsec encryption of intra-cluster communications
- Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs)
23.1.2. Supported network plugin feature matrix
Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins:
Feature | OpenShift SDN | OVN-Kubernetes |
---|---|---|
Egress IPs | Supported | Supported |
Egress firewall | Supported | Supported [1] |
Egress router | Supported | Supported [2] |
Hybrid networking | Not supported | Supported |
IPsec encryption for intra-cluster communication | Not supported | Supported |
IPv4 single-stack | Supported | Supported |
IPv6 single-stack | Not supported | Supported [3] |
IPv4/IPv6 dual-stack | Not supported | Supported [4] |
IPv6/IPv4 dual-stack | Not supported | Supported [5] |
Kubernetes network policy | Supported | Supported |
Kubernetes network policy logs | Not supported | Supported |
Hardware offloading | Not supported | Supported |
Multicast | Supported | Supported |
- Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
- Egress router for OVN-Kubernetes supports only redirect mode.
- IPv6 single-stack networking on a bare-metal platform.
- IPv4/IPv6 dual-stack networking on bare-metal, IBM Power®, and IBM Z® platforms.
- IPv6/IPv4 dual-stack networking on bare-metal and IBM Power® platforms.
23.1.3. OVN-Kubernetes IPv6 and dual-stack limitations
The OVN-Kubernetes network plugin has the following limitations:
For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field contains more than one message about the default gateway, as shown in the following output:
I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4
The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.
For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field contains more than one message about the default gateway, as shown in the following output:
I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface
The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.
23.1.4. Session affinity
Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.
Stickiness timeout for session affinity
The OVN-Kubernetes network plugin for OpenShift Container Platform calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client is continuously contacting the service, the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
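For illustration, the following Service manifest is a minimal sketch that enables client IP session affinity and sets the stickiness timeout; the name, selector, and port values are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  # Forward port 80 on the service VIP to port 8080 on the selected pods
  - protocol: TCP
    port: 80
    targetPort: 8080
  # Send a given client IP to the same back end
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # The session times out after 3600 seconds without a packet from the client
      timeoutSeconds: 3600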
23.2. OVN-Kubernetes architecture
23.2.1. Introduction to OVN-Kubernetes architecture
The following diagram shows the OVN-Kubernetes architecture.
Figure 23.1. OVN-Kubernetes architecture
The key components are:
- Cloud Management System (CMS) - A platform-specific client for OVN that provides a CMS-specific plugin for OVN integration. The plugin translates the cloud management system’s concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN.
- OVN Northbound database (nbdb) - Stores the logical network configuration passed by the CMS plugin.
- OVN Southbound database (sbdb) - Stores the physical and logical network configuration state for the Open vSwitch (OVS) system on each node, including tables that bind them.
- ovn-northd - The intermediary client between nbdb and sbdb. It translates the logical network configuration in terms of conventional network concepts, taken from the nbdb, into logical data path flows in the sbdb below it. The container name is northd and it runs in the ovnkube-master pods.
- ovn-controller - The OVN agent that interacts with OVS and hypervisors for any information or update that is needed for sbdb. The ovn-controller reads logical flows from the sbdb, translates them into OpenFlow flows, and sends them to the node’s OVS daemon. The container name is ovn-controller and it runs in the ovnkube-node pods.
The OVN northbound database has the logical network configuration passed down to it by the cloud management system (CMS). The OVN northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The ovn-northd (northd container) connects to the OVN northbound database and the OVN southbound database. It translates the logical network configuration in terms of conventional network concepts, taken from the OVN northbound database, into logical data path flows in the OVN southbound database.
The OVN southbound database has physical and logical representations of the network and binding tables that link them together. Every node in the cluster is represented in the southbound database, and you can see the ports that are connected to it. It also contains all the logic flows. The logic flows are shared with the ovn-controller process that runs on each node, and the ovn-controller turns them into OpenFlow rules to program Open vSwitch.
The Kubernetes control plane nodes each contain an ovnkube-master pod which hosts containers for the OVN northbound and southbound databases. All OVN northbound databases form a Raft cluster and all southbound databases form a separate Raft cluster. At any given time a single ovnkube-master is the leader and the other ovnkube-master pods are followers.
23.2.2. Listing all resources in the OVN-Kubernetes project
Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) installed.
Procedure
Run the following command to get all resources, endpoints, and ConfigMaps in the OVN-Kubernetes project:
$ oc get all,ep,cm -n openshift-ovn-kubernetes
There are three ovnkube-master pods that run on the control plane nodes, and two daemon sets used to deploy the ovnkube-master and ovnkube-node pods. There is one ovnkube-node pod for each node in the cluster; in this example there are five, so there are five nodes in the cluster. The ovnkube-config ConfigMap has the OpenShift Container Platform OVN-Kubernetes configurations started by ovnkube-master and ovnkube-node. The ovn-kubernetes-master ConfigMap has the information of the current OVN-Kubernetes master leader.
List all the containers in the ovnkube-master pods by running the following command:
$ oc get pods ovnkube-master-9g7zt \
  -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes
Expected output
northd nbdb kube-rbac-proxy sbdb ovnkube-master ovn-dbchecker
The ovnkube-master pod is made up of several containers. It is responsible for hosting the northbound database (nbdb container) and the southbound database (sbdb container), watching for cluster events for pods, egress IPs, namespaces, services, endpoints, egress firewall, and network policy and writing them to the northbound database (ovnkube-master pod), as well as managing pod subnet allocation to nodes.
List all the containers in the ovnkube-node pods by running the following command:
$ oc get pods ovnkube-node-jg52r \
  -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes
Expected output
ovn-controller ovn-acl-logging kube-rbac-proxy kube-rbac-proxy-ovn-metrics ovnkube-node
The ovnkube-node pod has a container (ovn-controller) that resides on each OpenShift Container Platform node. Each node’s ovn-controller connects the OVN northbound to the OVN southbound database to learn about the OVN configuration. The ovn-controller connects southbound to ovs-vswitchd as an OpenFlow controller, for control over network traffic, and to the local ovsdb-server to allow it to monitor and control the Open vSwitch configuration.
23.2.3. Listing the OVN-Kubernetes northbound database contents
To understand logic flow rules, you need to examine the northbound database and understand what objects are there to see how they are translated into logic flow rules. The most up-to-date information is present on the OVN Raft leader, and this procedure describes how to find the Raft leader and subsequently query it to list the OVN northbound database contents.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) installed.
Procedure
Find the OVN Raft leader for the northbound database.
Note: The Raft leader stores the most up-to-date information.
List the pods by running the following command:
$ oc get po -n openshift-ovn-kubernetes
Choose one of the master pods at random and run the following command:
$ oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \
  -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl \
  --timeout=3 cluster/status OVN_Northbound
Find the ovnkube-master pod running on IP address 10.0.242.240 by using the following command:
$ oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.242.240 | grep -v ovnkube-node
Example output
ovnkube-master-gt4ms   6/6   Running   1 (143m ago)   150m   10.0.242.240   ip-10-0-242-240.ec2.internal   <none>   <none>
The ovnkube-master-gt4ms pod runs on IP address 10.0.242.240.
Run the following command to show all the objects in the northbound database:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl show
The output is too long to list here. The list includes the NAT rules, logical switches, load balancers, and so on.
Run the following command to display the options available with the command ovn-nbctl:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
  -c northd -- ovn-nbctl --help
You can narrow down and focus on specific components by using some of the following commands:
Run the following command to show the list of logical routers:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl lr-list
Note: From this output you can see that there is a router on each node plus an ovn_cluster_router.
Run the following command to show the list of logical switches:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl ls-list
Note: From this output you can see that there is an ext switch for each node, plus switches with the node name itself and a join switch.
Run the following command to show the list of load balancers:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl lb-list
Note: From this truncated output you can see that there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services.
23.2.4. Command-line arguments for ovn-nbctl to examine northbound database contents
The following table describes the command-line arguments that can be used with ovn-nbctl to examine the contents of the northbound database.
Argument | Description |
---|---|
ovn-nbctl show | An overview of the northbound database contents. |
ovn-nbctl show <switch_or_router> | Show the details associated with the specified switch or router. |
ovn-nbctl lr-list | Show the logical routers. |
ovn-nbctl lrp-list <router> | Using the router information from lr-list, show the router ports. |
ovn-nbctl lr-nat-list <router> | Show network address translation details for the specified router. |
ovn-nbctl ls-list | Show the logical switches. |
ovn-nbctl lsp-list <switch> | Using the switch information from ls-list, show the switch ports. |
ovn-nbctl lsp-get-type <port> | Get the type for the logical port. |
ovn-nbctl lb-list | Show the load balancers. |
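For example, to show the NAT entries for one of the routers returned by lr-list, you could run a command like the following from the same northd container used in the previous procedure; <router_name> is a placeholder for a router name taken from the lr-list output:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms \
  -c northd -- ovn-nbctl lr-nat-list <router_name>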
23.2.5. Listing the OVN-Kubernetes southbound database contents
Logic flow rules are stored in the southbound database, which is a representation of your infrastructure. The most up-to-date information is present on the OVN Raft leader, and this procedure describes how to find the Raft leader and query it to list the OVN southbound database contents.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) installed.
Procedure
Find the OVN Raft leader for the southbound database.
Note: The Raft leader stores the most up-to-date information.
List the pods by running the following command:
$ oc get po -n openshift-ovn-kubernetes
Choose one of the master pods at random and run the following command to find the OVN southbound Raft leader:
$ oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q \
  -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl \
  --timeout=3 cluster/status OVN_Southbound
Find the ovnkube-master pod running on IP address 10.0.163.212 by using the following command:
$ oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.163.212 | grep -v ovnkube-node
Example output
ovnkube-master-mk6p6   6/6   Running   0   136m   10.0.163.212   ip-10-0-163-212.ec2.internal   <none>   <none>
The ovnkube-master-mk6p6 pod runs on IP address 10.0.163.212.
Run the following command to show all the information stored in the southbound database:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
  -c northd -- ovn-sbctl show
This detailed output shows the chassis and the ports that are attached to the chassis, which in this case are all of the router ports and anything that uses host networking. Any pods communicate out to the wider network by using source network address translation (SNAT). Their IP address is translated into the IP address of the node that the pod is running on and then sent out into the network.
In addition to the chassis information, the southbound database has all the logic flows, and those logic flows are then sent to the ovn-controller running on each of the nodes. The ovn-controller translates the logic flows into OpenFlow rules and ultimately programs Open vSwitch so that your pods can then follow OpenFlow rules and make it out of the network.
Run the following command to display the options available with the command ovn-sbctl:
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 \
  -c northd -- ovn-sbctl --help
23.2.6. Command-line arguments for ovn-sbctl to examine southbound database contents
The following table describes the command-line arguments that can be used with ovn-sbctl to examine the contents of the southbound database.
Argument | Description |
---|---|
ovn-sbctl show | Overview of the southbound database contents. |
ovn-sbctl list Port_Binding <port> | List the contents of the southbound database for the specified port. |
ovn-sbctl lflow-list | List the logical flows. |
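For example, to page through the logical flows on the southbound Raft leader identified in the previous procedure, you could run a command like the following; piping to head only keeps the output short:
$ oc exec -n openshift-ovn-kubernetes ovnkube-master-mk6p6 \
  -c northd -- ovn-sbctl lflow-list | head -20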
23.2.7. OVN-Kubernetes logical architecture
OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topology. When you run ovnkube-trace with the log level set to 2 or 5, the OVN-Kubernetes logical components are exposed. The following diagram shows how the routers and switches are connected in OpenShift Container Platform.
Figure 23.2. OVN-Kubernetes router and switch components
The key components involved in packet processing are:
- Gateway routers - Gateway routers, sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers, including their logical patch ports, are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database (ovn-sbdb).
- Distributed logical routers - Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor.
- Join local switch - Join local switches are used to connect the distributed router and gateway routers. It reduces the number of IP addresses needed on the distributed router.
- Logical switches with patch ports - Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels.
- Logical switches with localnet ports - Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments by using localnet ports.
- Patch ports - Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side.
- l3gateway ports - l3gateway ports are the port binding entries in the ovn-sbdb for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports to portray the fact that these ports are bound to a chassis just like the gateway router itself.
- localnet ports - localnet ports are present on the bridged logical switches that allow a connection to a locally accessible network from each ovn-controller instance. This helps model the direct connectivity to the physical network from the logical switches. A logical switch can have only a single localnet port attached to it.
23.2.7.1. Installing network-tools on local host
Install network-tools on your local host to make a collection of tools available for debugging OpenShift Container Platform cluster network issues.
Procedure
Clone the network-tools repository onto your workstation with the following command:
$ git clone git@github.com:openshift/network-tools.git
Change into the directory for the repository that you just cloned:
$ cd network-tools
Optional: List all available commands:
$ ./debug-scripts/network-tools -h
23.2.7.2. Running network-tools
Get information about the logical switches and routers by running network-tools.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- You have installed network-tools on your local host.
Procedure
List the routers by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list
List the localnet ports by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command \
  ovn-sbctl find Port_Binding type=localnet
List the l3gateway ports by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command \
  ovn-sbctl find Port_Binding type=l3gateway
List the patch ports by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command \
  ovn-sbctl find Port_Binding type=patch
23.3. Troubleshooting OVN-Kubernetes
OVN-Kubernetes has many sources of built-in health checks and logs.
23.3.1. Monitoring OVN-Kubernetes health by using readiness probes
The ovnkube-master and ovnkube-node pods have containers that are configured with readiness probes.
Prerequisites
- Access to the OpenShift CLI (oc).
- You have access to the cluster with cluster-admin privileges.
- You have installed jq.
Procedure
Review the details of the ovnkube-master readiness probe by running the following command:
$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master \
  -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'
The readiness probe for the northbound and southbound database containers in the ovnkube-master pod checks for the health of the Raft cluster hosting the databases.
Review the details of the ovnkube-node readiness probe by running the following command:
$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
  -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'
The ovnkube-node container in the ovnkube-node pod has a readiness probe to verify the presence of the ovn-kubernetes CNI configuration file, the absence of which would indicate that the pod is not running or is not ready to accept requests to configure pods.
Show all events, including the probe failures, for the namespace by using the following command:
$ oc get events -n openshift-ovn-kubernetes
Show the events for just this pod:
$ oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes
Show the messages and statuses from the Cluster Network Operator:
$ oc get co/network -o json | jq '.status.conditions[]'
Show the ready status of each container in the ovnkube-master pods by running the following script:
$ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
  -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === $p ===; \
  oc get pods -n openshift-ovn-kubernetes $p -o json | jq '.status.containerStatuses[] | .name, .ready'; \
  done
Note: The expectation is that all container statuses are reporting as true. Failure of a readiness probe sets the status to false.
23.3.2. Viewing OVN-Kubernetes alerts in the console
The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
Procedure (UI)
- In the Administrator perspective, select Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting Rules pages.
- View the rules for OVN-Kubernetes alerts by selecting Observe → Alerting → Alerting Rules.
23.3.3. Viewing OVN-Kubernetes alerts in the CLI
You can get information about alerts and their governing alerting rules and silences from the command line.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) installed.
- You have installed jq.
Procedure
View active or firing alerts by running the following commands.
Set the alert manager route environment variable by running the following command:
$ ALERT_MANAGER=$(oc get route alertmanager-main -n openshift-monitoring \
  -o jsonpath='{@.spec.host}')
Issue a curl request to the Alertmanager route API, with the correct authorization details, requesting specific fields by running the following command:
$ curl -s -k -H "Authorization: Bearer \
  $(oc create token prometheus-k8s -n openshift-monitoring)" \
  https://$ALERT_MANAGER/api/v1/alerts \
  | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"'
View alerting rules by running the following command:
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")'
23.3.4. Viewing the OVN-Kubernetes logs using the CLI
You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods by using the OpenShift CLI (oc).
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Access to the OpenShift CLI (oc).
- You have installed jq.
Procedure
View the log for a specific pod:
$ oc logs -f <pod_name> -c <container_name> -n <namespace>
where:
-f - Optional: Specifies that the output follows what is being written into the logs.
<pod_name> - Specifies the name of the pod.
<container_name> - Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.
<namespace> - Specifies the namespace that the pod is running in.
For example:
$ oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes
$ oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker
The contents of the log files are printed out.
Examine the most recent entries in all the containers in the ovnkube-master pods:
$ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
  -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \
  do echo === $p ===; for container in $(oc get pods -n openshift-ovn-kubernetes $p \
  -o json | jq -r '.status.containerStatuses[] | .name');do echo ---$container---; \
  oc logs -c $container $p -n openshift-ovn-kubernetes --tail=5; done; done
View the last 5 lines of every log in every container in an ovnkube-master pod by using the following command:
$ oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5
23.3.5. Viewing the OVN-Kubernetes logs using the web console
You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods in the web console.
Prerequisites
- Access to the OpenShift CLI (oc).
Procedure
- In the OpenShift Container Platform console, navigate to Workloads → Pods or navigate to the pod through the resource that you want to investigate.
- Select the openshift-ovn-kubernetes project from the drop-down menu.
- Click the name of the pod you want to investigate.
- Click Logs. By default for the ovnkube-master the logs associated with the northd container are displayed.
- Use the drop-down menu to select logs for each container in turn.
23.3.5.1. Changing the OVN-Kubernetes log levels
The default log level for OVN-Kubernetes is 2. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of OVN-Kubernetes to help you debug an issue.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
Procedure
Run the following command to get detailed information for all pods in the OVN-Kubernetes project:
$ oc get po -o wide -n openshift-ovn-kubernetes
Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml:
Example ConfigMap file
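A representative sketch of such a ConfigMap follows. It assumes the node names used in the pod restart commands later in this procedure and follows the upstream ovn-kubernetes env-overrides convention; the exact data keys and log-level variables can differ by release:
kind: ConfigMap
apiVersion: v1
metadata:
  name: env-overrides
  namespace: openshift-ovn-kubernetes
data:
  # Per-node keys set log levels for the ovnkube-node pod on that node
  ip-10-0-217-114.ec2.internal: |
    OVN_KUBE_LOG_LEVEL=5
    OVN_LOG_LEVEL=dbg
  ip-10-0-209-180.ec2.internal: |
    OVN_KUBE_LOG_LEVEL=5
    OVN_LOG_LEVEL=dbg
  # The _master key sets log levels for the ovnkube-master containers
  _master: |
    OVN_KUBE_LOG_LEVEL=5
    OVN_LOG_LEVEL=dbg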
Apply the ConfigMap file by using the following command:
$ oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml
Example output
configmap/env-overrides.yaml created
Restart the ovnkube pods to apply the new log level by using the following commands:
$ oc delete pod -n openshift-ovn-kubernetes \
  --field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node
$ oc delete pod -n openshift-ovn-kubernetes \
  --field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node
$ oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master
23.3.6. Checking the OVN-Kubernetes pod network connectivity
The connectivity check controller, in OpenShift Container Platform 4.10 and later, orchestrates connection verification checks in your cluster. These include the Kubernetes API, the OpenShift API, and individual nodes. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel.
Prerequisites
- Access to the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
- You have installed jq.
Procedure
To list the current PodNetworkConnectivityCheck objects, enter the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics
View the most recent success for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
  -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'
View the most recent failures for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
  -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'
View the most recent outages for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
  -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'
The connectivity check controller also logs metrics from these checks into Prometheus.
View all the metrics by running the following command:
$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
  promtool query instant http://localhost:9090 \
  '{component="openshift-network-diagnostics"}'
View the latency between the source pod and the OpenShift API service for the last 5 minutes:
$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
  promtool query instant http://localhost:9090 \
  '{component="openshift-network-diagnostics"}'
23.4. Tracing OpenFlow with ovnkube-trace
OVN and OVS traffic flows can be simulated in a single utility called ovnkube-trace. The ovnkube-trace utility runs ovn-trace, ovs-appctl ofproto/trace, and ovn-detrace and correlates that information in a single output.
You can execute the ovnkube-trace binary from a dedicated container. For releases after OpenShift Container Platform 4.7, you can also copy the binary to a local host and execute it from that host.
The binaries in the Quay images do not currently work for dual IP stack or IPv6-only environments. For those environments, you must build from source.
23.4.1. Installing the ovnkube-trace on local host
The ovnkube-trace tool traces packet simulations for arbitrary UDP or TCP traffic between points in an OVN-Kubernetes driven OpenShift Container Platform cluster. Copy the ovnkube-trace binary to your local host, making it available to run against the cluster.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
Procedure
Create a pod variable by using the following command:
$ POD=$(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o name | head -1 | awk -F '/' '{print $NF}')
Run the following command on your local host to copy the binary from the ovnkube-master pods:
$ oc cp -n openshift-ovn-kubernetes $POD:/usr/bin/ovnkube-trace ovnkube-trace
Make ovnkube-trace executable by running the following command:
$ chmod +x ovnkube-trace
Display the options available with ovnkube-trace by running the following command:
$ ./ovnkube-trace -help
The command-line arguments supported are familiar Kubernetes constructs, such as namespaces, pods, and services, so you do not need to find the MAC address, the IP address of the destination nodes, or the ICMP type.
The log levels are:
- 0 (minimal output)
- 2 (more verbose output showing results of trace commands)
- 5 (debug output)
23.4.2. Running ovnkube-trace
Run ovn-trace to simulate packet forwarding within an OVN logical network.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- You have installed ovnkube-trace on your local host.
Example: Testing that DNS resolution works from a deployed pod
This example illustrates how to test the DNS resolution from a deployed pod to the core DNS pod that runs in the cluster.
Procedure
Start a web service in the default namespace by entering the following command:
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
List the pods running in the openshift-dns namespace:
$ oc get pods -n openshift-dns
Run the following ovnkube-trace command to verify that DNS resolution is working:
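The exact invocation depends on the DNS pod name in your cluster. A representative command, with <dns_pod_name> taken from the openshift-dns pod listing above and flag names taken from the ovnkube-trace help output, looks like the following; treat it as a sketch rather than the exact documented command:
$ ./ovnkube-trace \
  -src-namespace default \
  -src web \
  -dst-namespace openshift-dns \
  -dst <dns_pod_name> \
  -udp -dst-port 53 \
  -loglevel 0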
The output indicates success from the deployed pod to the DNS port and also indicates that it is successful going back in the other direction. This means that bi-directional traffic is supported on UDP port 53, so the web pod can perform DNS resolution against the core DNS pods.
If, for example, that did not work and you want to get the ovn-trace, the ovs-appctl ofproto/trace, and the ovn-detrace output, and more debug-type information, increase the log level to 2 and run the command again as follows:
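For example, keeping the same assumed invocation as above and only raising the log level:
$ ./ovnkube-trace \
  -src-namespace default \
  -src web \
  -dst-namespace openshift-dns \
  -dst <dns_pod_name> \
  -udp -dst-port 53 \
  -loglevel 2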
The output from this increased log level is too much to list here. In a failure situation, the output of this command shows which flow is dropping the traffic. For example, an egress or ingress network policy might be configured on the cluster that does not allow that traffic.
Example: Verifying a configured default deny by using debug output
This example illustrates how to use the debug output to identify that an ingress default deny policy blocks traffic.
Procedure
Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:
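A minimal sketch of such a policy, using the standard NetworkPolicy API, looks like the following; the empty pod selector selects every pod in the default namespace and the empty ingress list denies all ingress traffic to them:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: default
spec:
  # An empty podSelector matches all pods in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
  # No ingress rules means all ingress traffic is denied
  ingress: []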
Apply the policy by entering the following command:
$ oc apply -f deny-by-default.yaml
Example output
networkpolicy.networking.k8s.io/deny-by-default created
Start a web service in the default namespace by entering the following command:
$ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
Run the following command to create the prod namespace:
$ oc create namespace prod
Run the following command to label the prod namespace:
$ oc label namespace/prod purpose=production
Run the following command to deploy an alpine image in the prod namespace and start a shell:
$ oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh
- Open another terminal session.
In this new terminal session, run ovnkube-trace to verify the failure in communication between the source pod test-6459 running in the prod namespace and the destination pod running in the default namespace:
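A representative invocation, using the same assumed flags as in the previous example, looks like this:
$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 0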
Expected output
I0116 14:20:47.380775   50822 ovs.go:90] Maximum command line arguments set to: 191102
ovn-trace source pod to destination pod indicates failure from test-6459 to web
Increase the log level to 2 to expose the reason for the failure by running the following command:
In the expected output, a callout in the trace indicates that ingress traffic is blocked due to the default deny policy being in place.
Create a policy that allows traffic from all pods in namespaces with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:
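A sketch of such a policy, selecting the web pod and allowing ingress only from namespaces labeled purpose=production, looks like the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
  namespace: default
spec:
  # Apply the policy to the web pod in the default namespace
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow traffic from any pod in namespaces labeled purpose=production
    - namespaceSelector:
        matchLabels:
          purpose: production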
Apply the policy by entering the following command:
$ oc apply -f web-allow-prod.yaml
Run ovnkube-trace to verify that traffic is now allowed by entering the following command:
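Again as a sketch, the verification command mirrors the earlier failure test:
$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 0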
In the open shell, run the following command:
$ wget -qO- --timeout=2 http://web.default
The request succeeds, which confirms that the policy allows traffic from the prod namespace to the web pod.
23.5. Migrating from the OpenShift SDN network plugin
As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin.
You can use the offline migration method for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin. The offline migration method is a manual process that includes some downtime.
23.5.1. Migration to the OVN-Kubernetes network plugin
Migrating to the OVN-Kubernetes network plugin is a manual process that includes some downtime during which your cluster is unreachable.
Before you migrate your OpenShift Container Platform cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster.
Although a rollback procedure is provided, the migration is intended to be a one-way process.
A migration to the OVN-Kubernetes network plugin is supported on the following platforms:
- Bare metal hardware
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- IBM Cloud®
- Microsoft Azure
- Red Hat OpenStack Platform (RHOSP)
- Red Hat Virtualization (RHV)
- VMware vSphere
Migrating to or from the OVN-Kubernetes network plugin is not supported for managed OpenShift cloud services such as Red Hat OpenShift Dedicated, Azure Red Hat OpenShift (ARO), and Red Hat OpenShift Service on AWS (ROSA).
Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin is not supported on Nutanix.
23.5.1.1. Considerations for migrating to the OVN-Kubernetes network plugin
If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin.
The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration.
While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same.
If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin:
- Namespace isolation
- Egress router pods
- If your cluster or surrounding network uses any part of the 100.64.0.0/16 address range, you must choose another unused IP range by specifying the v4InternalSubnet spec under the spec.defaultNetwork.ovnKubernetesConfig object definition. OVN-Kubernetes uses the IP range 100.64.0.0/16 internally by default. A sketch of how to set this field follows this list.
- If your openshift-sdn cluster with Precision Time Protocol (PTP) uses the User Datagram Protocol (UDP) for hardware time stamping and you migrate to the OVN-Kubernetes plugin, the hardware time stamping cannot be applied to primary interface devices, such as an Open vSwitch (OVS) bridge. As a result, UDP version 4 configurations cannot work with a br-ex interface.
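As a sketch of the first item in this list, you could set a different internal subnet on the Network operator configuration before the migration; 100.68.0.0/16 here is only an example of an unused range:
$ oc patch network.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"v4InternalSubnet":"100.68.0.0/16"}}}}'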
The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plugins.
Primary network interface
The OpenShift SDN plugin allows application of the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability.
If you have an NNCP applied to the primary interface, you must delete the NNCP before migrating to the OVN-Kubernetes network plugin. Deleting the NNCP does not remove the configuration from the primary interface, but with OVN-Kubernetes, Kubernetes NMState cannot manage this configuration. Instead, the configure-ovs.sh shell script manages the primary interface and the configuration attached to this interface.
Namespace isolation
OVN-Kubernetes supports only the network policy isolation mode.
For a cluster using OpenShift SDN that is configured in either the multitenant or subnet isolation mode, you can still migrate to the OVN-Kubernetes network plugin. Note that after the migration operation, multitenant isolation mode is dropped, so you must manually configure network policies to achieve the same level of project-level isolation for pods and services.
Egress IP addresses
OpenShift SDN supports two different Egress IP modes:
- In the automatically assigned approach, an egress IP address range is assigned to a node.
- In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node.
The migration process supports migrating Egress IP configurations that use the automatically assigned mode.
The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table:
| OVN-Kubernetes | OpenShift SDN |
| --- | --- |
For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address".
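As a sketch of the OVN-Kubernetes approach, an egress IP is declared with an EgressIP object in the k8s.ovn.org/v1 API group. The name, IP address, and namespace label below are placeholders, not values from this document:

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-prod
spec:
  egressIPs:
  - 192.0.2.10          # example egress IP address
  namespaceSelector:
    matchLabels:
      env: prod         # example label selecting the namespaces that use this egress IP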
Egress network policies
The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table:
| OVN-Kubernetes | OpenShift SDN |
| --- | --- |
Because the name of an EgressFirewall object can only be set to default, after the migration all migrated EgressNetworkPolicy objects are named default, regardless of what the name was under OpenShift SDN.
If you subsequently roll back to OpenShift SDN, all EgressNetworkPolicy objects are named default because the prior name is lost.
For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project".
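A minimal sketch of an OVN-Kubernetes egress firewall, using the required object name default; the CIDR values are placeholders, not values from this document:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <project>
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24   # example allowed destination
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0      # deny all other egress traffic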
Egress router pods
OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode.
When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod.
Multicast
The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table:
| OVN-Kubernetes | OpenShift SDN |
| --- | --- |
For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project".
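As a hedged illustration of the difference, multicast is commonly enabled with a namespace annotation in OVN-Kubernetes and with a NetNamespace annotation in OpenShift SDN; <project> is a placeholder and the annotation keys should be verified against your cluster version:

$ oc annotate namespace <project> k8s.ovn.org/multicast-enabled=true                              # OVN-Kubernetes
$ oc annotate netnamespace <project> netnamespace.network.openshift.io/multicast-enabled=true    # OpenShift SDN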
Network policies
OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group. No changes are necessary in your network policies when migrating from OpenShift SDN.
23.5.1.2. How the migration process works
The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response.
| User-initiated steps | Migration activity |
| --- | --- |
| Set the | |
| Update the | |
| Reboot each node in the cluster. | |
If a rollback to OpenShift SDN is required, the following table describes the process.
You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback.
| User-initiated steps | Migration activity |
| --- | --- |
| Suspend the MCO to ensure that it does not interrupt the migration. | The MCO stops. |
| Set the | |
| Update the | |
| Reboot each node in the cluster. | |
| Enable the MCO after all nodes in the cluster reboot. | |
23.5.1.3. Using an Ansible playbook to migrate to the OVN-Kubernetes network plugin
As a cluster administrator, you can use an Ansible collection, network.offline_migration_sdn_to_ovnk, to migrate from the OpenShift SDN Container Network Interface (CNI) network plugin to the OVN-Kubernetes plugin for your cluster. The Ansible collection includes the following playbooks:
- playbooks/playbook-migration.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the migration process.
- playbooks/playbook-rollback.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the rollback process.
Prerequisites
- You installed the python3 package, minimum version 3.10.
- You installed the jmespath and jq packages.
- You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
- You checked whether your cluster uses static routes or routing policies in the host network. If it does, a later procedure step requires that you set the routingViaHost parameter to true and the ipForwarding parameter to Global in the gatewayConfig section of the playbooks/playbook-migration.yml file.
- If the OpenShift SDN plugin uses the 100.64.0.0/16 and 100.88.0.0/16 address ranges, you patched the address ranges. For more information, see "Patching OVN-Kubernetes address ranges" in the Additional resources section.
Procedure
Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):

$ sudo dnf install -y ansible-core

Create an ansible.cfg file and add information similar to the following example to the file. Ensure that the file exists in the same directory as where the ansible-galaxy commands and the playbooks run.

From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
- In the Offline token section of the page, click the Load token button.
- After the token loads, click the Copy to clipboard icon.
- Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
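The original example file is not reproduced in this extraction. A sketch of an ansible.cfg that points ansible-galaxy at Automation Hub; the URLs shown are the typical Automation Hub endpoints and are assumptions to verify against your Connect to Hub page:

[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<paste_offline_token_here>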
Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:

$ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk

Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:

$ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk

Example output

network.offline_migration_sdn_to_ovnk 1.0.2

The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.

Configure migration features in the playbooks/playbook-migration.yml file:

migration_interface_name
- If you use a NodeNetworkConfigurationPolicy (NNCP) resource on a primary interface, specify the interface name in the playbooks/playbook-migration.yml file so that the NNCP resource gets deleted on the primary interface during the migration process.
migration_disable_auto_migration
- Disables the auto-migration of OpenShift SDN CNI plugin features to the OVN-Kubernetes plugin. If you disable auto-migration of features, you must also set the migration_egress_ip, migration_egress_firewall, and migration_multicast parameters to false. If you need to enable auto-migration of features, set the parameter to false.
migration_routing_via_host
- Set to true to configure local gateway mode or false to configure shared gateway mode for nodes in your cluster. The default value is false. In local gateway mode, traffic is routed through the host network stack. In shared gateway mode, traffic is not routed through the host network stack.
migration_ip_forwarding
- If you configured local gateway mode, set IP forwarding to Global if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes.
migration_cidr
- Specifies a Classless Inter-Domain Routing (CIDR) IP address block for your cluster. You cannot use any CIDR block that overlaps the 100.64.0.0/16 CIDR block, because the OVN-Kubernetes network provider uses this block internally.
migration_prefix
- Ensure that you specify a prefix value, which is the slice of the CIDR block apportioned to each node in your cluster.
migration_mtu
- Optional parameter that sets a specific maximum transmission unit (MTU) to your cluster network after the migration process.
migration_geneve_port
- Optional parameter that sets a Geneve port for OVN-Kubernetes. The default port is 6081.
migration_ipv4_subnet
- Optional parameter that sets an IPv4 address range for internal use by OVN-Kubernetes. The default value for the parameter is 100.64.0.0/16.
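The original variables block is not reproduced in this extraction. A sketch of how the variables described above might be set in playbooks/playbook-migration.yml; the exact placement within the playbook and the values shown are assumptions and placeholders:

vars:
  migration_interface_name: ""              # set only if an NNCP is applied to the primary interface
  migration_disable_auto_migration: false   # false keeps auto-migration of SDN features enabled
  migration_egress_ip: true
  migration_egress_firewall: true
  migration_multicast: true
  migration_routing_via_host: false         # true selects local gateway mode
  migration_ip_forwarding: Global           # only meaningful in local gateway mode
  migration_cidr: "10.128.0.0/14"
  migration_prefix: 23
  migration_mtu: 1400
  migration_geneve_port: 6081
  migration_ipv4_subnet: "100.64.0.0/16"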
To run the playbooks/playbook-migration.yml file, enter the following command:

$ ansible-playbook -v playbooks/playbook-migration.yml
23.5.2. Migrating to the OVN-Kubernetes network plugin
As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.
Prerequisites
- You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode.
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have a recent backup of the etcd database.
- You can manually reboot each node.
- You checked that your cluster is in a known good state without any errors.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms.
Procedure
To back up the configuration for the cluster network, enter the following command:

$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml

Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command:

Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{"spec":{"migration":null}}'

Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps:

Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command:

$ oc get nncp

Example output

NAME          STATUS      REASON
bondmaster0   Available   SuccessfullyConfigured

NetworkManager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path.

Remove the NNCP from your cluster:

$ oc delete nncp <nncp_manifest_filename>

To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }'

Note: This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment.

Check that the reboot is finished by running the following command:
$ oc get mcp

Check that all cluster Operators are available by running the following command:

$ oc get co

Optional: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents:
- Egress IPs
- Egress firewall
- Multicast
To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys:
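The original patch example is not reproduced in this extraction. A sketch of such a patch, assuming the features stanza of the migration spec; replace each <bool> as described below:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{
      "spec": {
        "migration": {
          "networkType": "OVNKubernetes",
          "features": {
            "egressIP": <bool>,
            "egressFirewall": <bool>,
            "multicast": <bool>
          }
        }
      }
    }'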
where:

bool
- Specifies whether to enable migration of the feature. The default is true.
Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements:
Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step:
- If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored.
- If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step.
This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step.
Note: OpenShift SDN and OVN-Kubernetes have different overlay overhead. Select MTU values by following the guidelines on the "MTU value selection" page.
- Geneve (Generic Network Virtualization Encapsulation) overlay network port
- OVN-Kubernetes IPv4 internal subnet
To customize any of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch.

where:

mtu
- The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.
port
- The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789.
ipv4_subnet
- An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16.
Example patch command to update the mtu field
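The original example is not reproduced in this extraction. A sketch, assuming the mtu field of ovnKubernetesConfig; 1200 is only an example value:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"mtu":1200}}}}'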
As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get mcp

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Note: By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"

Example output

kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done

Verify that the following statements are true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart

where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

The machine config must include the following update to the systemd configuration:

ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes

If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.

To list the pods, enter the following command:
$ oc get pod -n openshift-machine-config-operator

The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five-character alphanumeric sequence.

Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:

$ oc logs <pod> -n openshift-machine-config-operator

where <pod> is the name of a machine config daemon pod.

- Resolve any errors in the logs shown by the output from the previous command.
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:
To specify the network provider without changing the cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'

To specify a different cluster network IP address block, enter the following command:
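The original command is not reproduced in this extraction. A sketch of the patch, with <cidr> and <prefix> as placeholders described below:

$ oc patch Network.config.openshift.io cluster --type='merge' \
    --patch '{
      "spec": {
        "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ],
        "networkType": "OVNKubernetes"
      }
    }'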
where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.

Important: You cannot change the service network address block during the migration.
Verify that the Multus daemon set rollout is complete before continuing with subsequent steps:
$ oc -n openshift-multus rollout status daemonset/multus

The name of the Multus pods is in the form of multus-<xxxxx>, where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.

Example output

Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out

To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches:

Important: The following scripts reboot all of the nodes in the cluster at the same time, which can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one by one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes.
- With the oc rsh command, you can use a bash script similar to the following:
- With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. A sketch of an SSH-based script follows this list.
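The original scripts are not reproduced in this extraction. A minimal sketch of an SSH-based reboot loop, assuming passwordless sudo for the core user on each node:

#!/bin/bash
# Reboot every node by connecting to its internal IP address.
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
  echo "rebooting node ${ip}"
  ssh -o StrictHostKeyChecking=no core@"${ip}" sudo shutdown -r +1
done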
Confirm that the migration succeeded:
To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes.

$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'

To confirm that the cluster nodes are in the Ready state, enter the following command:
$ oc get nodes

To confirm that your pods are not in an error state, enter the following command:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'

If pods on a node are in an error state, reboot that node.

To confirm that none of the cluster Operators are in an abnormal state, enter the following command:

$ oc get co

The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration configuration from the CNO configuration object, enter the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": null } }'

To remove custom configuration for the OpenShift SDN network provider, enter the following command:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }'

To remove the OpenShift SDN network provider namespace, enter the following command:

$ oc delete namespace openshift-sdn
Next steps
- Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking".
23.5.4. Understanding changes to external IP behavior in OVN-Kubernetes
When migrating from OpenShift SDN to OVN-Kubernetes (OVN-K), services that use external IPs might become inaccessible across namespaces due to network policy enforcement.
In OpenShift SDN, external IPs were accessible across namespaces by default. However, in OVN-K, network policies strictly enforce multitenant isolation, preventing access to services exposed via external IPs from other namespaces.
To ensure access, consider the following alternatives:
- Use an ingress or route: Instead of exposing services by using external IPs, configure an ingress or route to allow external access while maintaining security controls.
- Adjust the NetworkPolicy custom resource (CR): Modify a NetworkPolicy CR to explicitly allow access from required namespaces and ensure that traffic is allowed to the designated service ports. Without explicitly allowing traffic to the required ports, access might still be blocked, even if the namespace is allowed. A sketch of such a policy follows this list.
- Use a LoadBalancer service: If applicable, deploy a LoadBalancer service instead of relying on external IPs. For more information, see "NetworkPolicy and external IPs in OVN-Kubernetes".
23.6. Rolling back to the OpenShift SDN network provider
As a cluster administrator, you can roll back from the OVN-Kubernetes network plugin to OpenShift SDN only after the migration to the OVN-Kubernetes network plugin has completed successfully.
23.6.1. Migrating to the OpenShift SDN network plugin
Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable.
You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- A recent backup of the etcd database is available.
- A reboot can be triggered manually for each node.
- The cluster is in a known good state, without any errors.
Procedure
Stop all of the machine configuration pools managed by the Machine Config Operator (MCO):
Stop the master configuration pool by entering the following command in your CLI:

$ oc patch MachineConfigPool master --type='merge' --patch \
    '{ "spec": { "paused": true } }'

Stop the worker machine configuration pool by entering the following command in your CLI:

$ oc patch MachineConfigPool worker --type='merge' --patch \
    '{ "spec":{ "paused": true } }'

To prepare for the migration, set the migration field to null by entering the following command in your CLI:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": null } }'

Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation.

$ oc get Network.config cluster -o jsonpath='{.status.migration}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the patch to the
Network.operator.openshift.io
object to set the network plugin back to OpenShift SDN by entering the following command in your CLI:oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }'
$ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantIf you applied the patch to the
Network.config.openshift.io
object before the patch operation finalizes on theNetwork.operator.openshift.io
object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state.Confirm that the migration status of the network plugin for the
Network.config.openshift.io cluster
object isOpenShiftSDN
by entering the following command in your CLI:oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'
$ oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the patch to the
Network.config.openshift.io
object to set the network plugin back to OpenShift SDN by entering the following command in your CLI:oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }'
$ oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents:
- Egress IPs
- Egress firewall
- Multicast
To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
bool
: Specifies whether to enable migration of the feature. The default istrue
.Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements:
- Maximum transmission unit (MTU)
- VXLAN port
To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow mtu
-
The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to
50
less than the smallest node MTU value. port
-
The UDP port for the VXLAN overlay network. If a value is not specified, the default is
4789
. The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is6081
.
Example patch command
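The original example is not reproduced in this extraction. A sketch, assuming the mtu and vxlanPort fields of openshiftSDNConfig; the values are examples only:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{"spec":{"defaultNetwork":{"openshiftSDNConfig":{"mtu":1250,"vxlanPort":4789}}}}'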
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches:
With the
oc rsh
command, you can use a bash script similar to the following:Copy to Clipboard Copied! Toggle word wrap Toggle overflow With the
ssh
command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password.Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status:
$ oc -n openshift-multus rollout status daemonset/multus

The name of the Multus pods is in the form of multus-<xxxxx>, where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.

Example output

Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out

After the nodes in your cluster have rebooted and the Multus pods are rolled out, start all of the machine configuration pools by running the following commands:

Start the master configuration pool:

$ oc patch MachineConfigPool master --type='merge' --patch \
    '{ "spec": { "paused": false } }'

Start the worker configuration pool:

$ oc patch MachineConfigPool worker --type='merge' --patch \
    '{ "spec": { "paused": false } }'
As the MCO updates machines in each config pool, it reboots each node.
By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI:
$ oc describe node | egrep "hostname|machineconfig"

Example output

kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done

Verify that the following statements are true:

- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command in your CLI:
$ oc get machineconfig <config_name> -o yaml

where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
Confirm that the migration succeeded:
To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN.

$ oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'

To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI:

$ oc get nodes

If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.

To list the pods, enter the following command in your CLI:
$ oc get pod -n openshift-machine-config-operator

The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five-character alphanumeric sequence.

To display the pod log for each machine config daemon pod shown in the previous output, enter the following command in your CLI:

$ oc logs <pod> -n openshift-machine-config-operator

where <pod> is the name of a machine config daemon pod.

- Resolve any errors in the logs shown by the output from the previous command.
To confirm that your pods are not in an error state, enter the following command in your CLI:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'

If pods on a node are in an error state, reboot that node.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": null } }'

To remove the OVN-Kubernetes configuration, enter the following command in your CLI:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig": null } } }'

To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI:

$ oc delete namespace openshift-ovn-kubernetes
23.6.2. Using an Ansible playbook to roll back to the OpenShift SDN network plugin
As a cluster administrator, you can use the playbooks/playbook-rollback.yml playbook from the network.offline_migration_sdn_to_ovnk Ansible collection to roll back from the OVN-Kubernetes plugin to the OpenShift SDN Container Network Interface (CNI) network plugin.
Prerequisites
- You installed the python3 package, minimum version 3.10.
- You installed the jmespath and jq packages.
- You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
Procedure
Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):

$ sudo dnf install -y ansible-core

Create an ansible.cfg file and add information similar to the following example to the file. Ensure that the file exists in the same directory as where the ansible-galaxy commands and the playbooks run.

From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
- In the Offline token section of the page, click the Load token button.
- After the token loads, click the Copy to clipboard icon.
- Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:

$ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk

Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:

$ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk

Example output

network.offline_migration_sdn_to_ovnk 1.0.2

The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.

Configure rollback features in the playbooks/playbook-rollback.yml file:

rollback_disable_auto_migration
- Disables the auto-migration of OVN-Kubernetes plugin features to the OpenShift SDN CNI plugin. If you disable auto-migration of features, you must also set the rollback_egress_ip, rollback_egress_firewall, and rollback_multicast parameters to false. If you need to enable auto-migration of features, set the parameter to false.
rollback_mtu
- Optional parameter that sets a specific maximum transmission unit (MTU) to your cluster network after the migration process.
rollback_vxlanPort
- Optional parameter that sets a VXLAN (Virtual Extensible LAN) port for use by the OpenShift SDN CNI plugin. The default value for the parameter is 4790.
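The original variables block is not reproduced in this extraction. A sketch of how the rollback variables described above might be set; the exact placement within the playbook and the values shown are assumptions and placeholders:

vars:
  rollback_disable_auto_migration: false   # false keeps auto-migration of OVN-Kubernetes features enabled
  rollback_egress_ip: true
  rollback_egress_firewall: true
  rollback_multicast: true
  rollback_mtu: 1400
  rollback_vxlanPort: 4790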
To run the playbooks/playbook-rollback.yml file, enter the following command:

$ ansible-playbook -v playbooks/playbook-rollback.yml
23.7. Converting to IPv4/IPv6 dual-stack networking
As a cluster administrator, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled.
A dual-stack network is supported on clusters provisioned on bare metal, IBM Power, IBM Z infrastructure, and single node OpenShift clusters.
While using dual-stack networking, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1, where IPv6 is required.
23.7.1. Converting to a dual-stack cluster network
As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network.
After converting to dual-stack networking only newly created pods are assigned IPv6 addresses. Any pods created before the conversion must be recreated to receive an IPv6 address.
Before proceeding, make sure your OpenShift cluster uses version 4.12.5 or later. Otherwise, the conversion can fail due to the bug ovnkube node pod crashed after converting to a dual-stack cluster network.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- Your cluster uses the OVN-Kubernetes network plugin.
- The cluster nodes have IPv6 addresses.
- You have configured an IPv6-enabled router based on your infrastructure.
Procedure
To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML:
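The original YAML is not reproduced in this extraction. A sketch of such a patch file, written as a JSON patch in YAML form because the command below uses --type='json'; the IPv6 blocks are example values, and the callouts 1 and 2 below refer to the cluster network object and the service network entry:

- op: add
  path: /spec/clusterNetwork/-
  value:                 # 1
    cidr: fd01::/48
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112      # 2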
- 1: Specify an object with the cidr and hostPrefix fields. The host prefix must be 64 or greater. The IPv6 CIDR prefix must be large enough to accommodate the specified host prefix.
- 2: Specify an IPv6 CIDR with a prefix of 112. Kubernetes uses only the lowest 16 bits. For a prefix of 112, IP addresses are assigned from 112 to 128 bits.
To patch the cluster network configuration, enter the following command:
$ oc patch network.config.openshift.io cluster \
    --type='json' --patch-file <file>.yaml

where:

file
- Specifies the name of the file you created in the previous step.

Example output

network.config.openshift.io/cluster patched
Verification
Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the previous procedure.
Display the network configuration:
$ oc describe network
23.7.2. Converting to a single-stack cluster network
As a cluster administrator, you can convert your dual-stack cluster network to a single-stack cluster network.
If you originally converted your IPv4 single-stack cluster network to a dual-stack cluster network, you can convert back only to the IPv4 single-stack cluster network, not to an IPv6 single-stack cluster network. Likewise, if you originally converted from an IPv6 single-stack cluster network, you can convert back only to the IPv6 single-stack cluster network.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- Your cluster uses the OVN-Kubernetes network plugin.
- The cluster nodes have IPv6 addresses.
- You have enabled dual-stack networking.
Procedure
Edit the networks.config.openshift.io custom resource (CR) by running the following command:

$ oc edit networks.config.openshift.io

- Remove the IPv4 or IPv6 configuration that you added to the cidr and the hostPrefix parameters when completing the "Converting to a dual-stack cluster network" procedure steps.
23.8. Logging for egress firewall and network policy rules
As a cluster administrator, you can configure audit logging for your cluster and enable logging for one or more namespaces. OpenShift Container Platform produces audit logs for both egress firewalls and network policies.
Audit logging is available for only the OVN-Kubernetes network plugin.
23.8.1. Audit logging
The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events.
You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster.
You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow, deny, or both values to enable audit logging for a namespace.
A network policy does not support setting the Pass action as a rule.
The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues.
Example namespace annotation
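The original annotation example is not reproduced in this extraction. A sketch of an annotated namespace; the namespace name and the severity values are placeholders:

kind: Namespace
apiVersion: v1
metadata:
  name: example
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "alert",
        "allow": "notice"
      }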
To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file.
The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0. The following example shows key parameters and their values output in a log message:
Example logging message that outputs parameters and their values
<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow>
Where:
- <timestamp> states the time and date for the creation of a log message.
- <message_serial> lists the serial number for a log message.
- acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin.
- <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks, then two severity levels show in the log message output.
- <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database (nbdb) that was created by the network policy.
- <verdict> can be either allow or drop.
- <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod.
- <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields.
The following example shows OVS fields that the flow parameter uses to extract packet information from system memory:
Example of OVS fields used by the flow parameter to extract packet information
<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>
Where:
- <proto> states the protocol. Valid values are tcp and udp.
- vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic.
- <src_mac> specifies the source Media Access Control (MAC) address.
- <source_mac> specifies the destination MAC address.
- <source_ip> lists the source IP address.
- <target_ip> lists the target IP address.
- <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic.
- <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network.
- <ip_ttl> states the Time To Live (TTL) information for a packet.
- <fragment> specifies what type of IP fragments or IP non-fragments to match.
- <tcp_src_port> shows the source port for TCP and UDP protocols.
- <tcp_dst_port> lists the destination port for TCP and UDP protocols.
- <tcp_flags> supports numerous flags, such as SYN, ACK, and PSH. If you need to set multiple values, each value is separated by a vertical bar (|). The UDP protocol does not support this parameter.
For more information about the previous field descriptions, go to the OVS manual page for ovs-fields.
Example ACL deny log entry for a network policy
2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
The following table describes namespace annotation values:
| Field | Description |
| --- | --- |
| | Blocks namespace access to any traffic that matches an ACL rule with the |
| | Permits namespace access to any traffic that matches an ACL rule with the |
| | A |
23.8.2. Audit configuration
The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging:
Audit logging configuration
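The original YAML is not reproduced in this extraction. A sketch of the policyAuditConfig stanza; the values shown are commonly cited defaults and should be verified against your cluster version:

spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"      # no additional log target beyond the node-local file
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0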
The following table describes the configuration fields for audit logging.
| Field | Type | Description |
| --- | --- | --- |
| | integer | The maximum number of messages to generate every second per node. The default value is |
| | integer | The maximum size for the audit log in bytes. The default value is |
| | string | One of the following additional audit log targets: |
| | string | The syslog facility, such as |
23.8.3. Configuring egress firewall and network policy auditing for a cluster
As a cluster administrator, you can customize audit logging for your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To customize the audit logging configuration, enter the following command:

$ oc edit network.operator.openshift.io/cluster

Tip: You can alternatively customize and apply the following YAML to configure audit logging:
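The original YAML is not reproduced in this extraction. A sketch of a Network operator CR carrying a customized policyAuditConfig; the field values are placeholders to adjust for your environment:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0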
Verification
To create a namespace with network policies complete the following steps:
Create a namespace for verification:
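The original manifest is not reproduced in this extraction. A minimal equivalent that creates the namespace used in this verification:

$ cat <<EOF | oc create -f -
kind: Namespace
apiVersion: v1
metadata:
  name: verify-audit-logging
EOF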
Example output

namespace/verify-audit-logging created

Enable audit logging:

$ oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }'

namespace/verify-audit-logging annotated

Create network policies for the namespace:
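The original manifests are not reproduced in this extraction. A sketch of the two policies whose creation is confirmed by the output below, written as standard NetworkPolicy objects:

$ cat <<EOF | oc create -n verify-audit-logging -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF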
Example output

networkpolicy.networking.k8s.io/deny-all created
networkpolicy.networking.k8s.io/allow-from-same-namespace created
Create a pod for source traffic in the default namespace:

Create two pods in the verify-audit-logging namespace:

Example output

pod/client created
pod/server created

To generate traffic and produce network policy audit log entries, complete the following steps:
Obtain the IP address for the pod named server in the verify-audit-logging namespace:

$ POD_IP=$(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')
client
in thedefault
namespace and confirm that all packets are dropped:oc exec -it client -n default -- /bin/ping -c 2 $POD_IP
$ oc exec -it client -n default -- /bin/ping -c 2 $POD_IP
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms
PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Ping the IP address saved in the
POD_IP
shell environment variable from the pod namedclient
in theverify-audit-logging
namespace and confirm that all packets are allowed:oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 $POD_IP
$ oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 $POD_IP
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Display the latest entries in the network policy audit log:
for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
  done
Example output
23.8.4. Enabling egress firewall and network policy audit logging for a namespace
As a cluster administrator, you can enable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To enable audit logging for a namespace, enter the following command:
$ oc annotate namespace <namespace> \
    k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }'
where:
<namespace>
- Specifies the name of the namespace.
Tip: You can alternatively apply the following YAML to enable audit logging:
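A sketch that mirrors the annotation set by the command above would look like the following:

kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "alert",
        "allow": "notice"
      }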
Example output
namespace/verify-audit-logging annotated
Verification
Display the latest entries in the audit log:
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
  done
Example output
2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
23.8.5. Disabling egress firewall and network policy audit logging for a namespace
As a cluster administrator, you can disable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To disable audit logging for a namespace, enter the following command:
$ oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-
where:
<namespace>
- Specifies the name of the namespace.
Tip: You can alternatively apply the following YAML to disable audit logging:
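A sketch that removes the annotation, mirroring the command above, would look like the following; setting the value to null clears the annotation when the manifest is applied:

kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: null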
Example output
namespace/verify-audit-logging annotated
23.9. Configuring IPsec encryption
With IPsec enabled, all pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec Transport mode.
IPsec is disabled by default. It can be enabled either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header.
The following support limitations exist for IPsec on an OpenShift Container Platform cluster:
- You must disable IPsec before updating to OpenShift Container Platform 4.15. After disabling IPsec, you must also delete the associated IPsec daemonsets. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. (OCPBUGS-43323)
The following documentation describes how to enable and disable IPsec after cluster installation.
23.9.1. Prerequisites
- You have decreased the size of the cluster MTU by 46 bytes to allow for the additional overhead of the IPsec ESP header. For more information on resizing the MTU that your cluster uses, see Changing the MTU for the cluster network.
23.9.2. Types of network traffic flows encrypted by IPsec
With IPsec enabled, only the following network traffic flows between pods are encrypted:
- Traffic between pods on different nodes on the cluster network
- Traffic from a pod on the host network to a pod on the cluster network
The following traffic flows are not encrypted:
- Traffic between pods on the same node on the cluster network
- Traffic between pods on the host network
- Traffic from a pod on the cluster network to a pod on the host network
The encrypted and unencrypted flows are illustrated in the following diagram:
23.9.2.1. Network connectivity requirements when IPsec is enabled
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
Protocol | Port | Description |
---|---|---|
UDP | 500 | IPsec IKE packets |
UDP | 4500 | IPsec NAT-T packets |
ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
23.9.3. Encryption protocol and IPsec mode
The encryption cipher used is AES-GCM-16-256. The integrity check value (ICV) is 16 bytes. The key length is 256 bits.
The IPsec mode used is Transport mode, a mode that encrypts end-to-end communication by adding an Encapsulating Security Payload (ESP) header to the IP header of the original packet and encrypting the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication.
23.9.4. Security certificate generation and rotation
The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO.
The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse.
23.9.5. Enabling IPsec encryption
As a cluster administrator, you can enable IPsec encryption after cluster installation.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- You have reduced the size of your cluster maximum transmission unit (MTU) by 46 bytes to allow for the overhead of the IPsec ESP header.
Procedure
To enable IPsec encryption, enter the following command:
$ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}'
Verification
To find the names of the OVN-Kubernetes control plane pods, enter the following command:
$ oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes
Example output
NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-fvtnh   6/6     Running   0          122m
ovnkube-master-hsgmm   6/6     Running   0          122m
ovnkube-master-qcmdc   6/6     Running   0          122m
Verify that IPsec is enabled on your cluster by entering the following command. The command output must state true to indicate that the node has IPsec enabled.

$ oc -n openshift-ovn-kubernetes rsh ovnkube-master-<pod_number_sequence> \
    ovn-nbctl --no-leader-only get nb_global . ipsec
Replace <pod_number_sequence> with the random sequence of letters, such as fvtnh, for a control plane pod from the previous step.
23.9.6. Disabling IPsec encryption
As a cluster administrator, you can disable IPsec encryption only if you enabled IPsec after cluster installation.
After disabling IPsec, you must delete the associated IPsec daemonset pods. If you do not delete these pods, you might experience issues with your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To disable IPsec encryption, enter the following command:
$ oc patch networks.operator.openshift.io/cluster --type=json \
    -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'
To find the name of the OVN-Kubernetes control plane pod that exists on the master node in your cluster, enter the following command:
$ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-master
Example output
ovnkube-master-5xqbf 8/8 Running 0 28m ...
Verify that the master node in your cluster has IPsec disabled by entering the following command. The command output must state false to indicate that the node has IPsec disabled.

$ oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<pod_number_sequence> \
    ovn-nbctl --no-leader-only get nb_global . ipsec
Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf, for the control plane pod from the previous step.
To remove the ovn-ipsec daemonset from the openshift-ovn-kubernetes namespace, enter the following command:
$ oc delete daemonset ovn-ipsec -n openshift-ovn-kubernetes
The ovn-ipsec daemonset configures IPsec connections for east-west traffic on the nodes.
Verify that the ovn-ipsec daemonset pods were removed from all nodes in your cluster by entering the following command. If the command output does not list any pods, the removal operation is successful.
$ oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec
Note: You might need to re-run the command for deleting the pods because sometimes the initial command attempt does not delete them.
- Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets.
23.9.7. Additional resources
23.10. Configuring an egress firewall for a project
As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster.
23.10.1. How an egress firewall works in a project
As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:
- A pod can only connect to internal hosts and cannot initiate connections to the public internet.
- A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster.
- A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster.
- A pod can connect to only specific external hosts.
For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources.
Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:
- An IP address range in CIDR format
- A DNS name that resolves to an IP address
- A port number
- A protocol that is one of the following protocols: TCP, UDP, or SCTP
If your egress firewall includes a deny rule for 0.0.0.0/0, access to your OpenShift Container Platform API servers is blocked. You must add allow rules for each IP address that your API servers use, and place them before the deny rule.
The following example illustrates the order of the egress firewall rules necessary to ensure API server access:
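A sketch of that ordering follows, with an allow rule for the API server address range placed before the global deny rule; <api_server_address_range> is a placeholder for the addresses returned by the command below:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: <api_server_address_range>   # allow the API server addresses first
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0                    # then deny all other egress traffic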
To find the IP address for your API servers, run oc get ep kubernetes -n default.
For more information, see BZ#1988324.
Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.
23.10.1.1. Limitations of an egress firewall
An egress firewall has the following limitations:
- No project can have more than one EgressFirewall object.
- A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project.
- If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.
An Egress Firewall resource can be created in the kube-node-lease, kube-public, kube-system, openshift, and openshift- projects.
23.10.1.2. Matching order for egress firewall policy rules
The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
23.10.1.3. How Domain Name Server (DNS) resolution works
If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:
- Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
- The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
- Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes.
The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution.
If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses.
23.10.2. EgressFirewall custom resource (CR) object
You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to.
The following YAML describes an EgressFirewall CR object:
EgressFirewall object
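A skeleton consistent with the rule fields described in the next section follows; the name default matches the creation example later in this chapter and is otherwise an assumption:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:                    # one or more egress rules
  - type: <Allow_or_Deny>
    to:
      cidrSelector: <cidr>   # or dnsName, but not both in the same rule
    ports:                   # optional
    - port: <port>
      protocol: <protocol>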
23.10.2.1. EgressFirewall rules
The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format or a domain name. The egress stanza expects an array of one or more objects.
Egress policy rule stanza
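A sketch of one rule object follows, with the callout numbers used in the descriptions below; specify cidrSelector or dnsName, not both:

egress:
- type: <type> 1
  to: 2
    cidrSelector: <cidr> 3
    dnsName: <dns_name> 4
  ports: 5
  - port: <port>
    protocol: <protocol>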
1 - The type of rule. The value must be either Allow or Deny.
2 - A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule.
3 - An IP address range in CIDR format.
4 - A DNS domain name.
5 - Optional: A stanza describing a collection of network ports and protocols for the rule.
Ports stanza
ports:
- port: <port>
  protocol: <protocol>
23.10.2.2. Example EgressFirewall CR objects
The following example defines several egress firewall policy rules:
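A sketch with several rules follows, using the callout referenced below; the specific CIDR and domain values are illustrative assumptions:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress: 1
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24       # illustrative external subnet
  - type: Allow
    to:
      dnsName: www.example.com       # illustrative external host
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0        # deny all other egress traffic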
1 - A collection of egress firewall policy rule objects.
The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443.
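A sketch matching that description follows:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 172.16.1.1/32
    ports:
    - port: 80
      protocol: TCP    # TCP traffic to port 80 is denied
    - port: 443        # traffic of any protocol to port 443 is denied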
23.10.3. Creating an egress firewall policy object
As a cluster administrator, you can create an egress firewall policy object for a project.
If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules.
Prerequisites
- A cluster that uses the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Create a policy rule:
- Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.
- In the file you created, define an egress policy object.
Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to.
$ oc create -f <policy_name>.yaml -n <project>
In the following example, a new EgressFirewall object is created in a project named project1:
$ oc create -f default.yaml -n project1
Example output
egressfirewall.k8s.ovn.org/v1 created
- Optional: Save the <policy_name>.yaml file so that you can make changes later.
23.11. Viewing an egress firewall for a project
As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall.
23.11.1. Viewing an EgressFirewall object
You can view an EgressFirewall object in your cluster.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- You must log in to the cluster.
Procedure
Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command:
$ oc get egressfirewall --all-namespaces
To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect.
$ oc describe egressfirewall <policy_name>
Example output
23.12. Editing an egress firewall for a project
As a cluster administrator, you can modify network traffic rules for an existing egress firewall.
23.12.1. Editing an EgressFirewall object
As a cluster administrator, you can update the egress firewall for a project.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project.
$ oc get -n <project> egressfirewall
Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy.
$ oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml
Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to.

After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object.
$ oc replace -f <filename>.yaml
23.13. Removing an egress firewall from a project
As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster.
23.13.1. Removing an EgressFirewall object
As a cluster administrator, you can remove an egress firewall from a project.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project.
$ oc get -n <project> egressfirewall
Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object.
$ oc delete -n <project> egressfirewall <name>
23.14. Configuring an egress IP address
As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
In an installer-provisioned infrastructure cluster, do not assign egress IP addresses to the infrastructure node that already hosts the ingress VIP. For more information, see the Red Hat Knowledgebase solution POD from the egress IP enabled namespace cannot access OCP route in an IPI cluster when the egress IP is assigned to the infra node that already hosts the ingress VIP.
23.14.1. Egress IP address architectural design and implementation
The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.
For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.
In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.
Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0.
23.14.1.1. Platform support
Support for the egress IP address functionality on various platforms is summarized in the following table:
Platform | Supported |
---|---|
Bare metal | Yes |
VMware vSphere | Yes |
Red Hat OpenStack Platform (RHOSP) | Yes |
Amazon Web Services (AWS) | Yes |
Google Cloud Platform (GCP) | Yes |
Microsoft Azure | Yes |
The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). (BZ#2039656)
23.14.1.2. Public cloud platform considerations
Typically, public cloud providers place a limit on egress IPs. This means that there is a constraint on the absolute number of assignable IP addresses per node for clusters provisioned on public cloud infrastructure. The maximum number of assignable IP addresses per node, or the IP capacity, can be described in the following formula:
IP capacity = public cloud default capacity - sum(current IP assignments)
While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, if a public cloud provider limits IP address capacity to 10 IP addresses per node, and you have 8 nodes, the total number of assignable IP addresses is only 80. To achieve a higher IP address capacity, you would need to allocate additional nodes. For example, if you needed 150 assignable IP addresses, you would need to allocate 7 additional nodes.
To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node.
The annotation value is an array with a single object with fields that provide the following information for the primary network interface:
- interface: Specifies the interface ID on AWS and Azure and the interface name on GCP.
- ifaddr: Specifies the subnet mask for one or both IP address families.
- capacity: Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses.
Automatic attachment and detachment of egress IP addresses is available. This allows traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. Both OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.12, support this capability.
The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address>
. Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port.
When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig
object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port.
The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability.
Example cloud.network.openshift.io/egress-ipconfig annotation on AWS
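A sketch of the annotation shape on AWS follows; the interface ID, subnet, and capacity values are illustrative assumptions:

cloud.network.openshift.io/egress-ipconfig: |
  [{"interface":"eni-0123456789abcdef0",
    "ifaddr":{"ipv4":"10.0.128.0/18"},
    "capacity":{"ipv4":14,"ipv6":15}}]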
Example cloud.network.openshift.io/egress-ipconfig annotation on GCP
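Similarly, a sketch of the annotation shape on GCP follows, with a combined IP capacity field and illustrative values:

cloud.network.openshift.io/egress-ipconfig: |
  [{"interface":"nic0",
    "ifaddr":{"ipv4":"10.0.128.0/18"},
    "capacity":{"ip":14}}]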
The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation.
23.14.1.2.1. Amazon Web Services (AWS) IP address capacity limits
On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type.
23.14.1.2.2. Google Cloud Platform (GCP) IP address capacity limits
On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity.
The following capacity limits exist for IP aliasing assignment:
- Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100.
- Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000.
For more information, see Per instance quotas and Alias IP ranges overview.
23.14.1.2.3. Microsoft Azure IP address capacity limits
On Azure, the following capacity limits exist for IP address assignment:
- Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256.
- Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536.
For more information, see Networking limits.
23.14.1.3. Assignment of egress IPs to pods
To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied:
- At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label.
- An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace.
If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, OpenShift Container Platform might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label.
To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects.
23.14.1.4. Assignment of egress IPs to nodes
When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label:
- An egress IP address is never assigned to more than one node at a time.
- An egress IP address is equally balanced between available nodes that can host the egress IP address.
If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply:
- No node will ever host more than one of the specified IP addresses.
- Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
- If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions.
When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod.
Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2, either might be used for each TCP connection or UDP conversation.
23.14.1.5. Architectural diagram of an egress IP address configuration
The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network.
Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses.
The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102. The traffic is balanced roughly equally between these two nodes.
The following resources from the diagram are illustrated in detail:
Namespace objects
The namespaces are defined in the following manifest:
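A sketch of two namespaces carrying the env: prod label used by the EgressIP object below follows; the namespace names are assumptions:

apiVersion: v1
kind: Namespace
metadata:
  name: namespace1
  labels:
    env: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace2
  labels:
    env: prod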
EgressIP object
The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod. The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102.
For the configuration in the previous example, OpenShift Container Platform assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned.
23.14.2. EgressIP object
The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace.
EgressIP selected pods cannot serve as backends for services with externalTrafficPolicy set to Local. If you try this configuration, service ingress traffic that targets the pods gets incorrectly rerouted to the egress node that hosts the EgressIP. This situation negatively impacts the handling of incoming service traffic and causes connections to drop, which leads to unavailable and non-functional services.
1 - The name for the EgressIP object.
2 - An array of one or more IP addresses.
3 - One or more selectors for the namespaces to associate the egress IP addresses with.
4 - Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace.
The following YAML describes the stanza for the namespace selector:
Namespace selector stanza
namespaceSelector: 1
  matchLabels:
    <label_name>: <label_value>
1 - One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected.
The following YAML describes the optional stanza for the pod selector:
Pod selector stanza
podSelector: 1
  matchLabels:
    <label_name>: <label_value>
1 - Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Other pods in the namespace are not selected.
In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod:
Example EgressIP object
In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development:
Example EgressIP object
23.14.3. The egressIPConfig object
As a feature of egress IP, the reachabilityTotalTimeoutSeconds parameter configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down.
You can set a value for reachabilityTotalTimeoutSeconds in the configuration for the egressIPConfig object. Setting a large value might cause the EgressIP implementation to react slowly to node changes, such as when an EgressIP node has an issue and becomes unreachable.
If you omit the reachabilityTotalTimeoutSeconds parameter from the egressIPConfig object, the platform chooses a reasonable default value, which is subject to change over time. The current default is 1 second. A value of 0 disables the reachability check for the EgressIP node.
The following egressIPConfig object describes changing the reachabilityTotalTimeoutSeconds value from the default of 1 second to 5 seconds:
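A sketch of the Network operator configuration follows, assuming egressIPConfig sits under ovnKubernetesConfig, with the callout numbers used in the descriptions below:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      egressIPConfig: 1
        reachabilityTotalTimeoutSeconds: 5 2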
1 - The egressIPConfig object holds the configuration options for the EgressIP object. By changing these configurations, you can extend the EgressIP object.
2 - The value for reachabilityTotalTimeoutSeconds accepts integer values from 0 to 60. A value of 0 disables the reachability check of the EgressIP node. Setting a value from 1 to 60 corresponds to the timeout in seconds for a probe to send the reachability check to the node.
23.14.4. Labeling a node to host egress IP addresses
You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OpenShift Container Platform can assign one or more egress IP addresses to the node.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a cluster administrator.
Procedure
To label a node so that it can host one or more egress IP addresses, enter the following command:
$ oc label nodes <node_name> k8s.ovn.org/egress-assignable=""
where <node_name> is the name of the node to label.
Tip: You can alternatively apply the following YAML to add the label to a node:
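A sketch that adds the label through the Node object follows:

apiVersion: v1
kind: Node
metadata:
  labels:
    k8s.ovn.org/egress-assignable: ""
  name: <node_name>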
23.14.5. Next steps
23.15. Assigning an egress IP address
As a cluster administrator, you can assign an egress IP address for traffic leaving the cluster from a namespace or from specific pods in a namespace.
23.15.1. Assigning an egress IP address to a namespace
You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a cluster administrator.
- Configure at least one node to host an egress IP address.
Procedure
Create an EgressIP object:
- Create a <egressips_name>.yaml file where <egressips_name> is the name of the object.
- In the file that you created, define an EgressIP object, as in the following example:
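A sketch follows that uses the env: qa label applied to the namespace in a later step; the object name and IP addresses are assumptions:

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: <egressips_name>
spec:
  egressIPs:
  - 192.168.127.10
  - 192.168.127.11
  namespaceSelector:
    matchLabels:
      env: qa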
To create the object, enter the following command.
$ oc apply -f <egressips_name>.yaml
Replace <egressips_name> with the name of the object.
Example output
egressips.k8s.ovn.org/<egressips_name> created
- Optional: Store the <egressips_name>.yaml file so that you can make changes later.
Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an EgressIP object defined in step 1, run the following command:
$ oc label ns <namespace> env=qa
Replace <namespace> with the namespace that requires egress IP addresses.
Verification
To show all egress IPs that are in use in your cluster, enter the following command:
$ oc get egressip -o yaml
Note: The command oc get egressip only returns one egress IP address regardless of how many are configured. This is not a bug and is a limitation of Kubernetes. As a workaround, you can pass in the -o yaml or -o json flags to return all egress IP addresses in use.

Example output
23.16. Considerations for the use of an egress router pod
23.16.1. About an egress router pod
The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses.
The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software.
The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic.
23.16.1.1. Egress router modes
In redirect mode, an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example:
$ curl <router_service_IP> <port>
The egress router CNI plugin supports redirect mode only. This differs from the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode.
23.16.1.2. Egress router pod implementation
The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod.
An egress router is a pod that has two network interfaces. For example, the pod can have eth0 and net1 network interfaces. The eth0 interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The net1 interface is on a secondary network and has an IP address and gateway for that network. Other pods in the OpenShift Container Platform cluster can access the egress router service and the service enables the pods to access external services. The egress router acts as a bridge between pods and an external system.
Traffic that leaves the egress router exits through a node, but the packets have the MAC address of the net1 interface from the egress router pod.
When you add an egress router custom resource, the Cluster Network Operator creates the following objects:
- The network attachment definition for the net1 secondary network interface of the pod.
- A deployment for the egress router.
If you delete an egress router custom resource, the Operator deletes the two objects in the preceding list that are associated with the egress router.
23.16.1.3. Deployment considerations
An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address.
- Red Hat OpenStack Platform (RHOSP)
If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail:
$ openstack port set --allowed-address \
    ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>
- Red Hat Virtualization (RHV)
- If you are using RHV, you must select No Network Filter for the Virtual network interface controller (vNIC).
- VMware vSphere
- If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client.
Specifically, ensure that the following are enabled:
23.16.1.4. Failover configuration
To avoid downtime, the Cluster Network Operator deploys the egress router pod as a deployment resource. The deployment name is egress-router-cni-deployment. The pod that corresponds to the deployment has a label of app=egress-router-cni.
To create a new service for the deployment, use the oc expose deployment/egress-router-cni-deployment --port <port_number> command or create a file like the following example:
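A sketch of a Service that selects the egress router pods by their app=egress-router-cni label follows; the service name and port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: app-egress
spec:
  ports:
  - name: tcp-8080
    protocol: TCP
    port: 8080
  type: ClusterIP
  selector:
    app: egress-router-cni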
23.17. Deploying an egress router pod in redirect mode
As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address.
The egress router implementation uses the egress router Container Network Interface (CNI) plugin.
23.17.1. Egress router custom resource
Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode:
- Optional: The namespace field specifies the namespace to create the egress router in. If you do not specify a value in the file or on the command line, the default namespace is used.
- The addresses field specifies the IP addresses to configure on the secondary network interface.
- The ip field specifies the reserved source IP address and netmask from the physical network that the node is on to use with the egress router pod. Use CIDR notation to specify the IP address and netmask.
- The gateway field specifies the IP address of the network gateway.
- Optional: The redirectRules field specifies a combination of egress destination IP address, egress router port, and protocol. Incoming connections to the egress router on the specified port and protocol are routed to the destination IP address.
- Optional: The targetPort field specifies the network port on the destination IP address. If this field is not specified, traffic is routed to the same network port that it arrived on.
- The protocol field supports TCP, UDP, or SCTP.
- Optional: The fallbackIP field specifies a destination IP address. If you do not specify any redirect rules, the egress router sends all traffic to this fallback IP address. If you specify redirect rules, any connections to network ports that are not defined in the rules are sent by the egress router to this fallback IP address. If you do not specify this field, the egress router rejects connections to network ports that are not defined in the rules.
Example egress router specification
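The following sketch is consistent with the fields described above and with the addresses that appear in the verification output later in this section (192.168.12.99 and 192.168.12.1); the destination IP addresses in the redirect rules, and the exact nesting of the redirect fields, are assumptions:

apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
  name: egress-router-redirect
spec:
  addresses:
  - ip: "192.168.12.99/24"         # reserved source IP address and netmask
    gateway: "192.168.12.1"        # network gateway
  redirect:
    redirectRules:
    - destinationIP: "10.0.0.99"   # illustrative destination
      port: 80
      protocol: UDP
    - destinationIP: "203.0.113.26"
      port: 8080
      targetPort: 80
      protocol: TCP
    fallbackIP: "203.0.113.27"     # traffic to undefined ports is sent here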
23.17.2. Deploying an egress router in redirect mode
You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses.
After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create an egress router definition.
To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example:
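A sketch follows, with the service name and port as assumptions and the selector carrying the callout referenced below:

apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: web-app
    protocol: TCP
    port: 8080
  type: ClusterIP
  selector:
    app: egress-router-cni 1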
1 - Specify the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable.
After you create the service, your pods can connect to the service. The egress router pod redirects traffic to the corresponding port on the destination IP address. The connections originate from the reserved source IP address.
Verification
To verify that the Cluster Network Operator started the egress router, complete the following procedure:
View the network attachment definition that the Operator created for the egress router:
$ oc get network-attachment-definition egress-router-cni-nad
The name of the network attachment definition is not configurable.
Example output
NAME                    AGE
egress-router-cni-nad   18m
View the deployment for the egress router pod:
$ oc get deployment egress-router-cni-deployment
The name of the deployment is not configurable.
Example output
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
egress-router-cni-deployment   1/1     1            1           18m
View the status of the egress router pod:
$ oc get pods -l app=egress-router-cni
Example output
NAME                                            READY   STATUS    RESTARTS   AGE
egress-router-cni-deployment-575465c75c-qkq6m   1/1     Running   0          18m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - View the logs and the routing table for the egress router pod.
Get the node name for the egress router pod:
$ POD_NODENAME=$(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}")
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:
$ oc debug node/$POD_NODENAME
Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host, you can run binaries from the executable paths of the host:
# chroot /host
From within the chroot environment console, display the egress router logs:
# cat /tmp/egress-router-log
Example output
The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure.

From within the chroot environment console, get the container ID:
# crictl ps --name egress-router-cni-pod | awk '{print $1}'
Example output
CONTAINER
bac9fae69ddb6
Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6:
# crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print $2}'
Example output
68857
Enter the network namespace of the container:
# nsenter -n -t 68857
Display the routing table:
# ip route
In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99. The pod routes all other traffic to the gateway at IP address 192.168.12.1. Routing for the service network is not shown.

Example output
default via 192.168.12.1 dev net1
10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18
192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99
192.168.12.1 dev net1
23.18. Enabling multicast for a project
23.18.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
- At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.
- By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of the exemption of multicast from network policies before enabling it.
Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis.
23.18.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.

$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled=true

Tip: You can alternatively apply YAML to add the annotation, as in the sketch that follows.
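A minimal sketch of such a manifest, assuming you set the annotation directly on the Namespace object (replace <namespace> with the project name):

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    # Enables multicast for all pods in this namespace
    k8s.ovn.org/multicast-enabled: "true"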
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.

$ oc project <project>

Create a pod to act as a multicast receiver, as in the sketch that follows.
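A sketch of a receiver pod, assuming the Red Hat UBI 9 image and that the socat and hostname packages can be installed from its repositories; the pod name mlistener and UDP port 30102 match the commands used later in this procedure:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mlistener
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi9
      command: ["/bin/sh", "-c"]
      # Install the tools used by the listener command, then keep the pod running
      args: ["dnf -y install socat hostname && sleep inf"]
      ports:
        - containerPort: 30102
          name: mlistener
          protocol: UDP
EOF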
Create a pod to act as a multicast sender, as in the sketch that follows.
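A sketch of a sender pod under the same assumptions; the pod name msender matches the command used later in this procedure:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: msender
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi9
      command: ["/bin/sh", "-c"]
      # Install socat for sending the multicast datagram, then keep the pod running
      args: ["dnf -y install socat && sleep inf"]
EOF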
In a new terminal window or tab, start the multicast listener.

Get the IP address for the Pod:

$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')

Start the multicast listener by entering the following command:

$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname

Start the multicast transmitter.

Get the pod network IP address range:

$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')

To send a multicast message, enter the following command:

$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"

If multicast is working, the previous command returns the following output:

mlistener
23.19. Disabling multicast for a project
23.19.1. Disabling multicast between pods
You can disable multicast between pods for your project.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
You must log in to the cluster with a user that has the
cluster-admin
role.
Procedure
Disable multicast by running the following command. Replace <namespace> with the namespace for the project you want to disable multicast for.

$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled-
Tip: You can alternatively apply YAML to delete the annotation, as in the sketch that follows.
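A minimal sketch of such a manifest, assuming you apply the Namespace object and clear the annotation by setting it to null (replace <namespace> with the project name):

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    # Setting the annotation to null removes it when the manifest is applied
    k8s.ovn.org/multicast-enabled: null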
23.20. Tracking network flows
As a cluster administrator, you can collect information about pod network flows from your cluster to assist with the following areas:
- Monitor ingress and egress traffic on the pod network.
- Troubleshoot performance issues.
- Gather data for capacity planning and security audits.
When you enable the collection of network flows, only metadata about the traffic is collected. Packet data itself is not collected; instead, the protocol, source address, destination address, port numbers, number of bytes, and other packet-level information is recorded.
The data is collected in one or more of the following record formats:
- NetFlow
- sFlow
- IPFIX
When you configure the Cluster Network Operator (CNO) with one or more collector IP addresses and port numbers, the Operator configures Open vSwitch (OVS) on each node to send the network flows records to each collector.
You can configure the Operator to send records to more than one type of network flow collector. For example, you can send records to NetFlow collectors and also send records to sFlow collectors.
When OVS sends data to the collectors, each type of collector receives identical records. For example, if you configure two NetFlow collectors, OVS on a node sends identical records to the two collectors. If you also configure two sFlow collectors, the two sFlow collectors receive identical records. However, each collector type has a unique record format.
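As an illustration, a sketch of a spec that exports flows to one NetFlow collector and two sFlow collectors might look like the following; the addresses and ports are illustrative:

spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056
    sFlow:
      collectors:
        - 192.168.1.91:6343
        - 192.168.1.92:6343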
Collecting the network flows data and sending the records to collectors affects performance. Nodes process packets at a slower rate. If the performance impact is too great, you can delete the destinations for collectors to disable collecting network flows data and restore performance.
Enabling network flow collectors might have an impact on the overall performance of the cluster network.
23.20.1. Network object configuration for tracking network flows
The fields for configuring network flows collectors in the Cluster Network Operator (CNO) are shown in the following table:
Field | Type | Description
---|---|---
metadata.name | string | The name of the CNO object. This name is always cluster.
spec.exportNetworkFlows | object | One or more of netFlow, sFlow, or ipfix.
spec.exportNetworkFlows.netFlow.collectors | array | A list of IP address and network port pairs for up to 10 collectors.
spec.exportNetworkFlows.sFlow.collectors | array | A list of IP address and network port pairs for up to 10 collectors.
spec.exportNetworkFlows.ipfix.collectors | array | A list of IP address and network port pairs for up to 10 collectors.
After applying the following manifest to the CNO, the Operator configures Open vSwitch (OVS) on each node in the cluster to send network flows records to the NetFlow collector that is listening at 192.168.1.99:2056.

Example configuration for tracking network flows
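A sketch of such a manifest, consistent with the patch shown later in this section and assuming the CNO object is the Network.operator.openshift.io object named cluster:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        # NetFlow collector listening at this IP address and port
        - 192.168.1.99:2056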
23.20.2. Adding destinations for network flows collectors
As a cluster administrator, you can configure the Cluster Network Operator (CNO) to send network flows metadata about the pod network to a network flows collector.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
- You have a network flows collector and know the IP address and port that it listens on.
Procedure
Create a patch file that specifies the network flows collector type and the IP address and port information of the collectors:

spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056

Configure the CNO with the network flows collectors:

$ oc patch network.operator cluster --type merge -p "$(cat <file_name>.yaml)"

Example output

network.operator.openshift.io/cluster patched
Verification
Verification is not typically necessary. You can run the following command to confirm that Open vSwitch (OVS) on each node is configured to send network flows records to one or more collectors.
View the Operator configuration to confirm that the exportNetworkFlows field is configured:

$ oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}"

Example output

{"netFlow":{"collectors":["192.168.1.99:2056"]}}

View the network flows configuration in OVS from each node, as in the sketch that follows.
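The following is a minimal sketch, assuming the OVN-Kubernetes node pods run in the openshift-ovn-kubernetes namespace with the label app=ovnkube-node and include an ovnkube-node container that provides the ovs-vsctl utility; a configured collector appears in the targets column of the corresponding OVS record:

$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
    echo "--- $pod ---"
    # Query the OVS database on this node for NetFlow, sFlow, and IPFIX records
    oc exec -n openshift-ovn-kubernetes -c ovnkube-node "$pod" -- \
      bash -c 'for type in netflow sflow ipfix; do ovs-vsctl find "$type"; done'
  done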
23.20.3. Deleting all destinations for network flows collectors
As a cluster administrator, you can configure the Cluster Network Operator (CNO) to stop sending network flows metadata to a network flows collector.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
Procedure
Remove all network flows collectors:
$ oc patch network.operator cluster --type='json' \
    -p='[{"op":"remove", "path":"/spec/exportNetworkFlows"}]'

Example output

network.operator.openshift.io/cluster patched
23.21. Configuring hybrid networking
As a cluster administrator, you can configure the Red Hat OpenShift Networking OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.
23.21.1. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

where:

<installation_directory>
    Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory. A sketch of this step follows the parameter description below.

where:

<installation_directory>
    Specifies the directory name that contains the manifests/ directory for your cluster.
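A minimal sketch of this step, assuming a heredoc is used to write a Network operator stub with an empty spec (the object is named cluster):

$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF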
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

Specify a hybrid networking configuration
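A sketch of such a configuration, assuming the hybrid overlay settings are defined under spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig; the CIDR and port values are illustrative, and the comments mark the callouts described below:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: # 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 # 2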
1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.

- Save the cluster-network-03-config.yml file and quit the text editor.
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Complete any further installation configurations, and then create your cluster. Hybrid networking is enabled when the installation process is finished.