Chapter 24. OVN-Kubernetes network plugin
24.1. About the OVN-Kubernetes network plugin
The OpenShift Container Platform cluster uses a virtualized network for pod and service networks.
Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Container Platform. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
OVN-Kubernetes is the default networking solution for OpenShift Container Platform and single-node OpenShift deployments.
OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to decide how packets travel through the network. For more information, see the Open Virtual Network website.
OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. OVN-Kubernetes also provides advanced functionality that is not available with OpenFlow alone.
OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:
- Egress IPs
- Firewalls
- Hardware offloading
- Hybrid networking
- Internet Protocol Security (IPsec) encryption
- IPv6
- Multicast
- Network policy and network policy logs
- Routers
24.1.1. OVN-Kubernetes purpose
The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin:
- Uses OVN to manage network traffic flows.
- Implements Kubernetes network policy support, including ingress and egress rules.
- Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.
The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN:
- Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms
- Support for hybrid clusters with both Linux and Microsoft Windows workloads
- Optional IPsec encryption of intra-cluster communications
- Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs)
24.1.2. Supported network plugin feature matrix
Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins:
| Feature | OpenShift SDN | OVN-Kubernetes |
|---|---|---|
| Egress IPs | Supported | Supported |
| Egress firewall | Supported | Supported [1] |
| Egress router | Supported | Supported [2] |
| Hybrid networking | Not supported | Supported |
| IPsec encryption for intra-cluster communication | Not supported | Supported |
| IPv4 single-stack | Supported | Supported |
| IPv6 single-stack | Not supported | Supported [3] |
| IPv4/IPv6 dual-stack | Not supported | Supported [4] |
| IPv6/IPv4 dual-stack | Not supported | Supported [5] |
| Kubernetes network policy | Supported | Supported |
| Kubernetes network policy logs | Not supported | Supported |
| Hardware offloading | Not supported | Supported |
| Multicast | Supported | Supported |
1. Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
2. Egress router for OVN-Kubernetes supports only redirect mode.
3. IPv6 single-stack networking on a bare-metal platform.
4. IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power®, IBM Z®, and RHOSP platforms. Dual-stack networking on RHOSP is a Technology Preview feature.
5. IPv6/IPv4 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), and IBM Power® platforms.
24.1.3. OVN-Kubernetes IPv6 and dual-stack limitations
The OVN-Kubernetes network plugin has the following limitations:
For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the `ovnkube-node` daemon set enter the `CrashLoopBackOff` state. If you display a pod with a command such as `oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml`, the `status` field has more than one message about the default gateway, as shown in the following output:

```
I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4
```

The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.
For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the `ovnkube-node` daemon set enter the `CrashLoopBackOff` state. If you display a pod with a command such as `oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml`, the `status` field has only one message about the default gateway, as shown in the following output:

```
I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface
```

The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.
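You can check both requirements from the node host by comparing the default routes for each IP family. The following is a minimal sketch; the two sample route lines are illustrative, and on a real node you would read them from `ip -4 route show default` and `ip -6 route show default` instead:

```shell
# Minimal sketch: extract the default-gateway interface for each IP family
# and compare them. The sample route lines below are illustrative.
get_dev() { awk '{for (i = 1; i < NF; i++) if ($i == "dev") { print $(i + 1); exit }}'; }

v4_route="default via 192.168.127.1 dev br-ex proto dhcp"
v6_route="default via fe80::5054:ff:febe:bcd4 dev ens4 proto ra"

v4_if=$(echo "$v4_route" | get_dev)
v6_if=$(echo "$v6_route" | get_dev)

if [ "$v4_if" = "$v6_if" ]; then
    echo "OK: both families use $v4_if"
else
    echo "mismatch: IPv4 uses $v4_if, IPv6 uses $v6_if"
fi
```

A mismatch, as in this sample, corresponds to the `multiple gateway interfaces detected` failure described above.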
- If you set the `ipv6.disable` parameter to `1` in the `kernelArguments` section of the `MachineConfig` custom resource (CR) for your cluster, OVN-Kubernetes pods enter a `CrashLoopBackOff` state. Additionally, updating your cluster to a later version of OpenShift Container Platform fails because the Network Operator remains in a `Degraded` state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the `ipv6.disable` parameter to `1`.
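To confirm that IPv6 has not been disabled through kernel arguments, you can inspect the kernel command line. The following is a minimal sketch using an illustrative cmdline string; on a real node you would read `/proc/cmdline`:

```shell
# Minimal sketch: flag the unsupported ipv6.disable=1 kernel argument.
# The sample cmdline is illustrative; on a node, read /proc/cmdline instead.
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/vda4 rw ipv6.disable=1"

case " $cmdline " in
    *" ipv6.disable=1 "*) echo "unsupported: IPv6 is disabled" ;;
    *)                    echo "ok: IPv6 is enabled" ;;
esac
```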
24.1.4. Session affinity
Session affinity is a feature that applies to Kubernetes `Service` objects. You can use session affinity to ensure that traffic from the same client is consistently routed to the same backend pod.
24.1.4.1. Stickiness timeout for session affinity
The OVN-Kubernetes network plugin for OpenShift Container Platform calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a `curl` command, the timeout is measured from the time that the last packet of that exchange is received, and the session remains sticky for the duration set by the `timeoutSeconds` parameter.
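You configure this timeout on the `Service` object itself. The following is a minimal sketch of a service manifest that enables client-IP session affinity; the name, selector, and ports are illustrative, and `10800` is the Kubernetes default for `timeoutSeconds`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service      # illustrative name
spec:
  selector:
    app: example             # illustrative selector
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # stickiness timeout in seconds (Kubernetes default)
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```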
24.2. OVN-Kubernetes architecture
24.2.1. Introduction to OVN-Kubernetes architecture
The following diagram shows the OVN-Kubernetes architecture.
Figure 24.1. OVN-Kubernetes architecture
The key components are:
- Cloud Management System (CMS) - A platform-specific client for OVN that provides a CMS-specific plugin for OVN integration. The plugin translates the cloud management system's concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN.
- OVN Northbound database (`nbdb`) container - Stores the logical network configuration passed by the CMS plugin.
- OVN Southbound database (`sbdb`) container - Stores the physical and logical network configuration state for the Open vSwitch (OVS) system on each node, including tables that bind them.
- OVN north daemon (`ovn-northd`) - The intermediary client between the `nbdb` container and the `sbdb` container. It translates the logical network configuration, expressed in conventional network concepts and taken from the `nbdb` container, into logical data path flows in the `sbdb` container. The container name for the `ovn-northd` daemon is `northd` and it runs in the `ovnkube-node` pods.
- `ovn-controller` - The OVN agent that interacts with OVS and hypervisors for any information or updates that are needed by the `sbdb` container. The `ovn-controller` reads logical flows from the `sbdb` container, translates them into OpenFlow flows, and sends them to the node's OVS daemon. The container name is `ovn-controller` and it runs in the `ovnkube-node` pods.
The OVN northd, northbound database, and southbound database run on each node in the cluster and mostly contain and process information that is local to that node.
The OVN northbound database has the logical network configuration passed down to it by the cloud management system (CMS). The OVN northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The `ovn-northd` daemon, running in the `northd` container, translates this logical configuration into the logical data path flows stored in the southbound database.
The OVN southbound database has physical and logical representations of the network and binding tables that link them together. It contains the chassis information of the node and other constructs like remote transit switch ports that are required to connect to the other nodes in the cluster. The OVN southbound database also contains all the logic flows. The logic flows are shared with the `ovn-controller` process that runs on each node. The `ovn-controller` translates the logic flows into OpenFlow rules and programs the local Open vSwitch (OVS) daemon.
The Kubernetes control plane nodes contain two `ovnkube-control-plane` pods on separate nodes, which perform central IP address management (IPAM) allocation for each node in the cluster. At any given time, a single `ovnkube-control-plane` pod is the leader.
24.2.2. Listing all resources in the OVN-Kubernetes project
Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation.
Prerequisites
- Access to the cluster as a user with the `cluster-admin` role.
- The OpenShift CLI (`oc`) installed.
Procedure
Run the following command to get all resources, endpoints, and `ConfigMaps` in the OVN-Kubernetes project:

```
$ oc get all,ep,cm -n openshift-ovn-kubernetes
```

Example output

```
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
NAME                                         READY   STATUS    RESTARTS       AGE
pod/ovnkube-control-plane-65c6f55656-6d55h   2/2     Running   0              114m
pod/ovnkube-control-plane-65c6f55656-fd7vw   2/2     Running   2 (104m ago)   114m
pod/ovnkube-node-bcvts                       8/8     Running   0              113m
pod/ovnkube-node-drgvv                       8/8     Running   0              113m
pod/ovnkube-node-f2pxt                       8/8     Running   0              113m
pod/ovnkube-node-frqsb                       8/8     Running   0              105m
pod/ovnkube-node-lbxkk                       8/8     Running   0              105m
pod/ovnkube-node-tt7bx                       8/8     Running   1 (102m ago)   105m

NAME                                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/ovn-kubernetes-control-plane   ClusterIP   None         <none>        9108/TCP            114m
service/ovn-kubernetes-node            ClusterIP   None         <none>        9103/TCP,9105/TCP   114m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/ovnkube-node   6         6         6       6            6           beta.kubernetes.io/os=linux   114m

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ovnkube-control-plane   3/3     3            3           114m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/ovnkube-control-plane-65c6f55656   3         3         3       114m

NAME                                     ENDPOINTS                                               AGE
endpoints/ovn-kubernetes-control-plane   10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108               114m
endpoints/ovn-kubernetes-node            10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more...   114m

NAME                                 DATA   AGE
configmap/control-plane-status       1      113m
configmap/kube-root-ca.crt           1      114m
configmap/openshift-service-ca.crt   1      114m
configmap/ovn-ca                     1      114m
configmap/ovnkube-config             1      114m
configmap/signer-ca                  1      114m
```

There is one `ovnkube-node` pod for each node in the cluster. The `ovnkube-config` config map has the OpenShift Container Platform OVN-Kubernetes configurations.

List all of the containers in the `ovnkube-node` pods by running the following command:

```
$ oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes
```

Expected output

```
ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller
```

The `ovnkube-node` pod is made up of several containers. It is responsible for hosting the northbound database (`nbdb` container), the southbound database (`sbdb` container), the north daemon (`northd` container), `ovn-controller`, and the `ovnkube-controller` container. The `ovnkube-controller` container watches for API objects like pods, egress IPs, namespaces, services, endpoints, egress firewalls, and network policies. It is also responsible for allocating pod IP addresses from the available subnet pool for that node.

List all the containers in the `ovnkube-control-plane` pods by running the following command:

```
$ oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes
```

Expected output

```
kube-rbac-proxy ovnkube-cluster-manager
```

The `ovnkube-control-plane` pod has a container (`ovnkube-cluster-manager`) that resides on each OpenShift Container Platform node. The `ovnkube-cluster-manager` container allocates the pod subnet, transit switch subnet IP, and join switch subnet IP to each node in the cluster. The `kube-rbac-proxy` container monitors metrics for the `ovnkube-cluster-manager` container.
24.2.3. Listing the OVN-Kubernetes northbound database contents
Each node is controlled by the `ovnkube-controller` container running in the `ovnkube-node` pod on that node. To understand the OVN logical networking entities, you need to examine the northbound database that runs as a container inside the `ovnkube-node` pod on the node that you want to inspect.
Prerequisites
- Access to the cluster as a user with the `cluster-admin` role.
- The OpenShift CLI (`oc`) installed.
To run `ovn-nbctl` or `ovn-sbctl` commands, you must open a remote shell into the `nbdb` or `sbdb` containers on the relevant node.

Procedure
List the pods by running the following command:

```
$ oc get po -n openshift-ovn-kubernetes
```

Example output

```
NAME                                     READY   STATUS    RESTARTS   AGE
ovnkube-control-plane-8444dff7f9-4lh9k   2/2     Running   0          27m
ovnkube-control-plane-8444dff7f9-5rjh9   2/2     Running   0          27m
ovnkube-node-55xs2                       8/8     Running   0          26m
ovnkube-node-7r84r                       8/8     Running   0          16m
ovnkube-node-bqq8p                       8/8     Running   0          17m
ovnkube-node-mkj4f                       8/8     Running   0          26m
ovnkube-node-mlr8k                       8/8     Running   0          26m
ovnkube-node-wqn2m                       8/8     Running   0          16m
```

Optional: To list the pods with node information, run the following command:

```
$ oc get pods -n openshift-ovn-kubernetes -owide
```

Example output

```
NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE                                       NOMINATED NODE   READINESS GATES
ovnkube-control-plane-8444dff7f9-4lh9k   2/2     Running   0          27m   10.0.0.3     ci-ln-t487nnb-72292-mdcnq-master-1         <none>           <none>
ovnkube-control-plane-8444dff7f9-5rjh9   2/2     Running   0          27m   10.0.0.4     ci-ln-t487nnb-72292-mdcnq-master-2         <none>           <none>
ovnkube-node-55xs2                       8/8     Running   0          26m   10.0.0.4     ci-ln-t487nnb-72292-mdcnq-master-2         <none>           <none>
ovnkube-node-7r84r                       8/8     Running   0          17m   10.0.128.3   ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z   <none>           <none>
ovnkube-node-bqq8p                       8/8     Running   0          17m   10.0.128.2   ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms   <none>           <none>
ovnkube-node-mkj4f                       8/8     Running   0          27m   10.0.0.5     ci-ln-t487nnb-72292-mdcnq-master-0         <none>           <none>
ovnkube-node-mlr8k                       8/8     Running   0          27m   10.0.0.3     ci-ln-t487nnb-72292-mdcnq-master-1         <none>           <none>
ovnkube-node-wqn2m                       8/8     Running   0          17m   10.0.128.4   ci-ln-t487nnb-72292-mdcnq-worker-c-przlm   <none>           <none>
```

Navigate into a pod to look at the northbound database by running the following command:

```
$ oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2
```

Run the following command to show all the objects in the northbound database:

```
$ ovn-nbctl show
```

The output is too long to list here. The list includes the NAT rules, logical switches, load balancers, and so on.
You can narrow down and focus on specific components by using some of the following optional commands:
Run the following command to show the list of logical routers:

```
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \
    -c northd -- ovn-nbctl lr-list
```

Example output

```
45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2)
96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router)
```

Note: From this output you can see that there is a router on each node, plus an `ovn_cluster_router`.

Run the following command to show the list of logical switches:

```
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \
    -c nbdb -- ovn-nbctl ls-list
```

Example output

```
bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2)
b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2)
0aac0754-ea32-4e33-b086-35eeabf0a140 (join)
992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch)
```

Note: From this output you can see that there is an ext switch for each node, plus switches with the node name itself, and a join switch.
Run the following command to show the list of load balancers:

```
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \
    -c nbdb -- ovn-nbctl lb-list
```

Example output
```
UUID                                   LB                 PROTO   VIP                   IPs
7c84c673-ed2a-4436-9a1f-9bc5dd181eea   Service_default/   tcp     172.30.0.1:443        10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443
4d663fd9-ddc8-4271-b333-4c0e279e20bb   Service_default/   tcp     172.30.0.1:443        10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443
292eb07f-b82f-4962-868a-4f541d250bca   Service_openshif   tcp     172.30.105.247:443    10.129.0.12:8443
034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3   Service_openshif   tcp     172.30.192.38:443     10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259
a68bb53e-be84-48df-bd38-bdd82fcd4026   Service_openshif   tcp     172.30.161.125:8443   10.129.0.32:8443
6cc21b3d-2c54-4c94-8ff5-d8e017269c2e   Service_openshif   tcp     172.30.3.144:443      10.129.0.22:8443
37996ffd-7268-4862-a27f-61cd62e09c32   Service_openshif   tcp     172.30.181.107:443    10.129.0.18:8443
81d4da3c-f811-411f-ae0c-bc6713d0861d   Service_openshif   tcp     172.30.228.23:443     10.129.0.29:8443
ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e   Service_openshif   tcp     172.30.14.240:9443    10.129.0.36:9443
c88979fb-1ef5-414b-90ac-43b579351ac9   Service_openshif   tcp     172.30.231.192:9001   10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001
fcb0a3fb-4a77-4230-a84a-be45dce757e8   Service_openshif   tcp     172.30.189.92:443     10.130.0.17:8440
67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151   Service_openshif   tcp     172.30.67.218:443     10.129.0.9:8443
d0032fba-7d5e-424a-af25-4ab9b5d46e81   Service_openshif   tcp     172.30.102.137:2379   10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379
                                                          tcp     172.30.102.137:9979   10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979
7361c537-3eec-4e6c-bc0c-0522d182abd4   Service_openshif   tcp     172.30.198.215:9001   10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001
0296c437-1259-410b-a6fd-81c310ad0af5   Service_openshif   tcp     172.30.198.215:9001   10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001
5d5679f5-45b8-479d-9f7c-08b123c688b8   Service_openshif   tcp     172.30.38.253:17698   10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698
2adcbab4-d1c9-447d-9573-b5dc9f2efbfa   Service_openshif   tcp     172.30.148.52:443     10.0.0.4:9202,10.0.0.5:9202
                                                          tcp     172.30.148.52:444     10.0.0.4:9203,10.0.0.5:9203
                                                          tcp     172.30.148.52:445     10.0.0.4:9204,10.0.0.5:9204
                                                          tcp     172.30.148.52:446     10.0.0.4:9205,10.0.0.5:9205
2a33a6d7-af1b-4892-87cc-326a380b809b   Service_openshif   tcp     172.30.67.219:9091    10.129.2.16:9091,10.131.0.16:9091
                                                          tcp     172.30.67.219:9092    10.129.2.16:9092,10.131.0.16:9092
                                                          tcp     172.30.67.219:9093    10.129.2.16:9093,10.131.0.16:9093
                                                          tcp     172.30.67.219:9094    10.129.2.16:9094,10.131.0.16:9094
f56f59d7-231a-4974-99b3-792e2741ec8d   Service_openshif   tcp     172.30.89.212:443     10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443
08c2c6d7-d217-4b96-b5d8-c80c4e258116   Service_openshif   tcp     172.30.102.137:2379   10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379
                                                          tcp     172.30.102.137:9979   10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979
60a69c56-fc6a-4de6-bd88-3f2af5ba5665   Service_openshif   tcp     172.30.10.193:443     10.129.0.25:8443
ab1ef694-0826-4671-a22c-565fc2d282ec   Service_openshif   tcp     172.30.196.123:443    10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443
b1fb34d3-0944-4770-9ee3-2683e7a630e2   Service_openshif   tcp     172.30.158.93:8443    10.129.0.13:8443
95811c11-56e2-4877-be1e-c78ccb3a82a9   Service_openshif   tcp     172.30.46.85:9001     10.130.0.16:9001
4baba1d1-b873-4535-884c-3f6fc07a50fd   Service_openshif   tcp     172.30.28.87:443      10.129.0.26:8443
6c2e1c90-f0ca-484e-8a8e-40e71442110a   Service_openshif   udp     172.30.0.10:53        10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353
```

Note: From this truncated output you can see there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services.
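As a sketch of how to read one row of this output, each load balancer row pairs a service VIP (the service ClusterIP and port) with its backend endpoint addresses. The row below is copied from the example output:

```shell
# Minimal sketch: split one `ovn-nbctl lb-list` row (copied from the example
# output) into its VIP and backend-endpoint fields.
row='7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443'

set -- $row
echo "VIP:      $4"   # the service ClusterIP and port
echo "backends: $5"   # the endpoint addresses behind the service
```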
Run the following command to display the options available with the command `ovn-nbctl`:

```
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \
    -c nbdb ovn-nbctl --help
```
24.2.4. Command-line arguments for ovn-nbctl to examine northbound database contents
The following table describes the command-line arguments that can be used with `ovn-nbctl` to examine northbound database contents. Open a remote shell in the pod you want to view the contents of and then run the `ovn-nbctl` commands.
| Argument | Description |
|---|---|
| `ovn-nbctl show` | An overview of the northbound database contents as seen from a specific node. |
| `ovn-nbctl show <switch_or_router>` | Show the details associated with the specified switch or router. |
| `ovn-nbctl lr-list` | Show the logical routers. |
| `ovn-nbctl lrp-list <router>` | Using the router information from `ovn-nbctl lr-list`, show the router ports. |
| `ovn-nbctl lr-nat-list <router>` | Show network address translation details for the specified router. |
| `ovn-nbctl ls-list` | Show the logical switches. |
| `ovn-nbctl lsp-list <switch>` | Using the switch information from `ovn-nbctl ls-list`, show the switch ports. |
| `ovn-nbctl lsp-get-type <port>` | Get the type for the logical port. |
| `ovn-nbctl lb-list` | Show the load balancers. |
24.2.5. Listing the OVN-Kubernetes southbound database contents
Each node is controlled by the `ovnkube-controller` container running in the `ovnkube-node` pod on that node. To understand the OVN logical networking entities, you need to examine the southbound database that runs as a container inside the `ovnkube-node` pod on the node that you want to inspect.
Prerequisites
- Access to the cluster as a user with the `cluster-admin` role.
- The OpenShift CLI (`oc`) installed.
To run `ovn-nbctl` or `ovn-sbctl` commands, you must open a remote shell into the `nbdb` or `sbdb` containers on the relevant node.

Procedure
List the pods by running the following command:

```
$ oc get po -n openshift-ovn-kubernetes
```

Example output

```
NAME                                     READY   STATUS    RESTARTS   AGE
ovnkube-control-plane-8444dff7f9-4lh9k   2/2     Running   0          27m
ovnkube-control-plane-8444dff7f9-5rjh9   2/2     Running   0          27m
ovnkube-node-55xs2                       8/8     Running   0          26m
ovnkube-node-7r84r                       8/8     Running   0          16m
ovnkube-node-bqq8p                       8/8     Running   0          17m
ovnkube-node-mkj4f                       8/8     Running   0          26m
ovnkube-node-mlr8k                       8/8     Running   0          26m
ovnkube-node-wqn2m                       8/8     Running   0          16m
```

Optional: To list the pods with node information, run the following command:

```
$ oc get pods -n openshift-ovn-kubernetes -owide
```

Example output

```
NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE                                       NOMINATED NODE   READINESS GATES
ovnkube-control-plane-8444dff7f9-4lh9k   2/2     Running   0          27m   10.0.0.3     ci-ln-t487nnb-72292-mdcnq-master-1         <none>           <none>
ovnkube-control-plane-8444dff7f9-5rjh9   2/2     Running   0          27m   10.0.0.4     ci-ln-t487nnb-72292-mdcnq-master-2         <none>           <none>
ovnkube-node-55xs2                       8/8     Running   0          26m   10.0.0.4     ci-ln-t487nnb-72292-mdcnq-master-2         <none>           <none>
ovnkube-node-7r84r                       8/8     Running   0          17m   10.0.128.3   ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z   <none>           <none>
ovnkube-node-bqq8p                       8/8     Running   0          17m   10.0.128.2   ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms   <none>           <none>
ovnkube-node-mkj4f                       8/8     Running   0          27m   10.0.0.5     ci-ln-t487nnb-72292-mdcnq-master-0         <none>           <none>
ovnkube-node-mlr8k                       8/8     Running   0          27m   10.0.0.3     ci-ln-t487nnb-72292-mdcnq-master-1         <none>           <none>
ovnkube-node-wqn2m                       8/8     Running   0          17m   10.0.128.4   ci-ln-t487nnb-72292-mdcnq-worker-c-przlm   <none>           <none>
```

Navigate into a pod to look at the southbound database:

```
$ oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2
```

Run the following command to show all the objects in the southbound database:

```
$ ovn-sbctl show
```

Example output
```
Chassis "5db31703-35e9-413b-8cdf-69e7eecb41f7"
    hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz
    Encap geneve
        ip: "10.0.128.4"
        options: {csum="true"}
    Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz
Chassis "070debed-99b7-4bce-b17d-17e720b7f8bc"
    hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Encap geneve
        ip: "10.0.128.2"
        options: {csum="true"}
    Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding openshift-monitoring_alertmanager-main-1
    Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding openshift-e2e-loki_loki-promtail-qcrcz
    Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6
    Port_Binding openshift-multus_network-metrics-daemon-mkd4t
    Port_Binding openshift-ingress-canary_ingress-canary-xtvj4
    Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk
    Port_Binding openshift-dns_dns-default-zz582
    Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f
    Port_Binding openshift-network-diagnostics_network-check-target-tn228
    Port_Binding openshift-monitoring_prometheus-k8s-0
    Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj
Chassis "179ba069-0af1-401c-b044-e5ba90f60fea"
    hostname: ci-ln-9gp362t-72292-v2p94-master-0
    Encap geneve
        ip: "10.0.0.5"
        options: {csum="true"}
    Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0
Chassis "68c954f2-5a76-47be-9e84-1cb13bd9dab9"
    hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w
    Encap geneve
        ip: "10.0.128.3"
        options: {csum="true"}
    Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w
Chassis "2de65d9e-9abf-4b6e-a51d-a1e038b4d8af"
    hostname: ci-ln-9gp362t-72292-v2p94-master-2
    Encap geneve
        ip: "10.0.0.4"
        options: {csum="true"}
    Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2
Chassis "1d371cb8-5e21-44fd-9025-c4b162cc4247"
    hostname: ci-ln-9gp362t-72292-v2p94-master-1
    Encap geneve
        ip: "10.0.0.3"
        options: {csum="true"}
    Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1
```

This detailed output shows the chassis and the ports that are attached to the chassis, which in this case are all of the router ports and anything that runs like host networking. Any pods communicate out to the wider network by using source network address translation (SNAT). Their IP address is translated into the IP address of the node that the pod is running on, and then sent out into the network.
In addition to the chassis information, the southbound database has all the logic flows, and those logic flows are then sent to the `ovn-controller` running on each of the nodes. The `ovn-controller` translates the logic flows into OpenFlow rules and ultimately programs Open vSwitch so that your pods can then follow OpenFlow rules and make it out of the network.

Run the following command to display the options available with the command `ovn-sbctl`:

```
$ oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \
    -c sbdb ovn-sbctl --help
```
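The OpenFlow rules that `ovn-controller` installs follow a match/actions structure. The following is a minimal sketch that splits an illustrative rule (not taken from a live cluster) into those two parts:

```shell
# Minimal sketch: split an OpenFlow rule into its match part and its
# actions part. The sample rule is illustrative.
rule='cookie=0x0, duration=123.4s, table=0, priority=100,in_port=1 actions=resubmit(,8)'

match=${rule%% actions=*}   # everything before " actions="
actions=${rule#*actions=}   # everything after "actions="

echo "match:   $match"
echo "actions: $actions"
```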
24.2.6. Command-line arguments for ovn-sbctl to examine southbound database contents
The following table describes the command-line arguments that can be used with `ovn-sbctl` to examine southbound database contents. Open a remote shell in the pod you want to view the contents of and then run the `ovn-sbctl` commands.
| Argument | Description |
|---|---|
| `ovn-sbctl show` | An overview of the southbound database contents as seen from a specific node. |
| `ovn-sbctl list Port_Binding <port>` | List the contents of the southbound database for the specified port. |
| `ovn-sbctl lflow-list` | List the logical flows. |
24.2.7. OVN-Kubernetes logical architecture
OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topology. When you run `ovnkube-trace`, the output refers to these OVN logical components.
Figure 24.2. OVN-Kubernetes router and switch components
The key components involved in packet processing are:
- Gateway routers
- Gateway routers, sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers, including their logical patch ports, are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database (`ovn-sbdb`).
- Distributed logical routers
- Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor.
- Join local switch
- Join local switches are used to connect the distributed router and gateway routers. This reduces the number of IP addresses needed on the distributed router.
- Logical switches with patch ports
- Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels.
- Logical switches with localnet ports
- Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments using localnet ports.
- Patch ports
- Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side.
- l3gateway ports
- l3gateway ports are the port binding entries in the `ovn-sbdb` for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports just to portray the fact that these ports are bound to a chassis just like the gateway router itself.
- localnet ports
- localnet ports are present on the bridged logical switches that allow a connection to a locally accessible network from each `ovn-controller` instance. This helps model the direct connectivity to the physical network from the logical switches. A logical switch can have only a single localnet port attached to it.
24.2.7.1. Installing network-tools on local host
Install `network-tools` on your local host to make a collection of tools available for debugging OpenShift Container Platform cluster network issues.
Procedure
Clone the `network-tools` repository onto your workstation with the following command:

```
$ git clone git@github.com:openshift/network-tools.git
```

Change into the directory for the repository you just cloned:

```
$ cd network-tools
```

Optional: List all available commands:

```
$ ./debug-scripts/network-tools -h
```
24.2.7.2. Running network-tools
Get information about the logical switches and routers by running `network-tools` commands.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster as a user with `cluster-admin` privileges.
- You have installed `network-tools` on your local host.
Procedure
List the routers by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-listExample output
944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router)List the localnet ports by running the following command:
$ ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=localnetExample output
_uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]List the
ports by running the following command:l3gateway$ ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=l3gatewayExample output
_uuid                        : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c
additional_chassis           : []
additional_encap             : []
chassis                      : ca6eb600-3a10-4372-a83e-e0d957c4cd92
datapath                     : f3c2c959-743b-4037-854d-26627902597c
encap                        : []
external_ids                 : {}
gateway_chassis              : []
ha_chassis_group             : []
logical_port                 : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99
mac                          : ["42:01:0a:00:80:04"]
mirror_rules                 : []
nat_addresses                : ["42:01:0a:00:80:04 10.0.128.4"]
options                      : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99}
parent_port                  : []
port_security                : []
requested_additional_chassis : []
requested_chassis            : []
tag                          : []
tunnel_key                   : 1
type                         : l3gateway
up                           : true
virtual_parent               : []

_uuid                        : 6088d647-84f2-43f2-b53f-c9d379042679
additional_chassis           : []
additional_encap             : []
chassis                      : ca6eb600-3a10-4372-a83e-e0d957c4cd92
datapath                     : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e
encap                        : []
external_ids                 : {}
gateway_chassis              : []
ha_chassis_group             : []
logical_port                 : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99
mac                          : [router]
mirror_rules                 : []
nat_addresses                : []
options                      : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99}
parent_port                  : []
port_security                : []
requested_additional_chassis : []
requested_chassis            : []
tag                          : []
tunnel_key                   : 2
type                         : l3gateway
up                           : true
virtual_parent               : []
[...]

List the patch ports by running the following command:

$ ./debug-scripts/network-tools ovn-db-run-command \
    ovn-sbctl find Port_Binding type=patch

Example output
_uuid                        : 785fb8b6-ee5a-4792-a415-5b1cb855dac2
additional_chassis           : []
additional_encap             : []
chassis                      : []
datapath                     : f1ddd1cc-dc0d-43b4-90ca-12651305acec
encap                        : []
external_ids                 : {}
gateway_chassis              : []
ha_chassis_group             : []
logical_port                 : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99
mac                          : [router]
mirror_rules                 : []
nat_addresses                : ["0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\")"]
options                      : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99}
parent_port                  : []
port_security                : []
requested_additional_chassis : []
requested_chassis            : []
tag                          : []
tunnel_key                   : 1
type                         : patch
up                           : false
virtual_parent               : []

_uuid                        : c01ff587-21a5-40b4-8244-4cd0425e5d9a
additional_chassis           : []
additional_encap             : []
chassis                      : []
datapath                     : f6795586-bf92-4f84-9222-efe4ac6a7734
encap                        : []
external_ids                 : {}
gateway_chassis              : []
ha_chassis_group             : []
logical_port                 : rtoj-ovn_cluster_router
mac                          : ["0a:58:64:40:00:01 100.64.0.1/16"]
mirror_rules                 : []
nat_addresses                : []
options                      : {peer=jtor-ovn_cluster_router}
parent_port                  : []
port_security                : []
requested_additional_chassis : []
requested_chassis            : []
tag                          : []
tunnel_key                   : 1
type                         : patch
up                           : false
virtual_parent               : []
[...]
24.3. Troubleshooting OVN-Kubernetes

OVN-Kubernetes has many sources of built-in health checks and logs. Follow the instructions in these sections to examine your cluster. If a support case is necessary, follow the support guide to collect additional information through a `must-gather` of the cluster that uses the `-- gather_network_logs` option.
24.3.1. Monitoring OVN-Kubernetes health by using readiness probes

The `ovnkube-control-plane` and `ovnkube-node` pods have containers that are configured with readiness probes.
Prerequisites
- Access to the OpenShift CLI (`oc`).
- You have access to the cluster with `cluster-admin` privileges.
- You have installed `jq`.
Procedure
Review the details of the `ovnkube-node` readiness probe by running the following command:

$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
    -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'

The readiness probe for the northbound and southbound database containers in the `ovnkube-node` pod checks for the health of the databases and the `ovnkube-controller` container.

The `ovnkube-controller` container in the `ovnkube-node` pod has a readiness probe to verify the presence of the OVN-Kubernetes CNI configuration file, the absence of which would indicate that the pod is not running or is not ready to accept requests to configure pods.

Show all events, including the probe failures, for the namespace by using the following command:
$ oc get events -n openshift-ovn-kubernetes

Show the events for just a specific pod:

$ oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes

Show the messages and statuses from the cluster network operator:

$ oc get co/network -o json | jq '.status.conditions[]'

Show the `ready` status of each container in `ovnkube-node` pods by running the following script:

$ for p in $(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \
    -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === $p ===; \
    oc get pods -n openshift-ovn-kubernetes $p -o json | jq '.status.containerStatuses[] | .name, .ready'; \
    done

Note: The expectation is that all container statuses report `true`. Failure of a readiness probe sets the status to `false`.
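The "all containers ready" expectation in the note can be sketched offline as a simple check. This is an illustrative sketch only; the file name and the sample ready flags below are invented, not produced by any `oc` command.

```shell
# Simulated per-container ready flags (sample data, not from a real cluster).
cat > /tmp/ready_flags.txt <<'EOF'
ovn-controller true
northd true
nbdb true
sbdb false
EOF

# Fail the health check if any container reports ready=false.
if grep -qw false /tmp/ready_flags.txt; then
  echo "at least one container is not ready"
else
  echo "all containers ready"
fi
```

With the sample data above, the check reports a problem because `sbdb` carries `false`; a single failing readiness probe is enough to make the pod not ready.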
24.3.2. Viewing OVN-Kubernetes alerts in the console
The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
Procedure (UI)
- In the Administrator perspective, select Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting Rules pages.
- View the rules for OVN-Kubernetes alerts by selecting Observe → Alerting → Alerting Rules.
24.3.3. Viewing OVN-Kubernetes alerts in the CLI
You can get information about alerts and their governing alerting rules and silences from the command line.
Prerequisites
- Access to the cluster as a user with the `cluster-admin` role.
- The OpenShift CLI (`oc`) installed.
- You have installed `jq`.
Procedure
View active or firing alerts by running the following commands.
Set the alert manager route environment variable by running the following command:
$ ALERT_MANAGER=$(oc get route alertmanager-main -n openshift-monitoring \
    -o jsonpath='{@.spec.host}')

Issue a `curl` request to the alert manager route API by running the following command, replacing `$ALERT_MANAGER` with the URL of your `Alertmanager` instance:

$ curl -s -k -H "Authorization: Bearer $(oc create token prometheus-k8s -n openshift-monitoring)" https://$ALERT_MANAGER/api/v1/alerts | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"'
View alerting rules by running the following command:
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")'
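The jq filter above matches rule names spelled "ovn", "OVN", "Ovn", "North", or "South" by chaining several `contains()` checks. The same selection logic can be sketched offline with a single case-insensitive pattern; the alert names below are sample data for illustration, not the output of a live query.

```shell
# Sample alerting rule names (illustrative only).
cat > /tmp/rule_names.txt <<'EOF'
NoRunningOvnControlPlane
NorthboundStale
SouthboundStale
KubeAPIDown
EOF

# One case-insensitive pattern covers every spelling the jq filter lists.
grep -iE 'ovn|north|south' /tmp/rule_names.txt
```

The first three names match and are printed; `KubeAPIDown` is filtered out, mirroring how the jq expression keeps only OVN-related rules.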
24.3.4. Viewing the OVN-Kubernetes logs using the CLI

You can view the logs for each of the containers in the `ovnkube-master` and `ovnkube-node` pods by using the OpenShift CLI (`oc`).
Prerequisites
- Access to the cluster as a user with the `cluster-admin` role.
- Access to the OpenShift CLI (`oc`).
- You have installed `jq`.
Procedure
View the log for a specific pod:
$ oc logs -f <pod_name> -c <container_name> -n <namespace>

where:
`-f`: Optional: Specifies that the output follows what is being written into the logs.
`<pod_name>`: Specifies the name of the pod.
`<container_name>`: Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.
`<namespace>`: Specifies the namespace the pod is running in.
For example:
$ oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes

$ oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes

The contents of the log files are printed out.

Examine the most recent entries in all the containers in the `ovnkube-node` pods:

$ for p in $(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \
    -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \
    do echo === $p ===; for container in $(oc get pods -n openshift-ovn-kubernetes $p \
    -o json | jq -r '.status.containerStatuses[] | .name');do echo ---$container---; \
    oc logs -c $container $p -n openshift-ovn-kubernetes --tail=5; done; done

View the last 5 lines of every log in every container in an `ovnkube-node` pod by using the following command:

$ oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5
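The nested loop above iterates over pods in the outer loop and over each pod's containers in the inner loop. The same pattern can be sketched offline with directories standing in for pods and files standing in for container logs; all names here are illustrative, not real cluster objects.

```shell
# Build a fake log tree: one directory per "pod", one file per "container".
mkdir -p /tmp/demo-logs/podA /tmp/demo-logs/podB
printf 'line1\nline2\nline3\n' > /tmp/demo-logs/podA/ovnkube-controller.log
printf 'a\nb\n' > /tmp/demo-logs/podB/northd.log

# Outer loop over pods, inner loop over containers, tailing each log,
# exactly mirroring the oc/jq loop structure used against a real cluster.
for p in /tmp/demo-logs/*; do
  echo "=== $(basename "$p") ==="
  for c in "$p"/*.log; do
    echo "--- $(basename "$c" .log) ---"
    tail -n 2 "$c"
  done
done
```

The `=== pod ===` and `--- container ---` markers keep the interleaved output readable, which is the point of the loop in the real procedure.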
24.3.5. Viewing the OVN-Kubernetes logs using the web console

You can view the logs for each of the containers in the `ovnkube-master` and `ovnkube-node` pods by using the web console.
Prerequisites
- Access to the OpenShift CLI (`oc`).
Procedure
- In the OpenShift Container Platform console, navigate to Workloads → Pods or navigate to the pod through the resource you want to investigate.
- Select the `openshift-ovn-kubernetes` project from the drop-down menu.
- Click the name of the pod you want to investigate.
- Click Logs. By default for the `ovnkube-master`, the logs associated with the `northd` container are displayed.
- Use the drop-down menu to select logs for each container in turn.
24.3.5.1. Changing the OVN-Kubernetes log levels

The default log level for OVN-Kubernetes is 4. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of OVN-Kubernetes to help you debug an issue.
Prerequisites
- You have access to the cluster with `cluster-admin` privileges.
- You have access to the OpenShift Container Platform web console.
Procedure
Run the following command to get detailed information for all pods in the OVN-Kubernetes project:
$ oc get po -o wide -n openshift-ovn-kubernetes

Example output

NAME                                     READY   STATUS    RESTARTS       AGE    IP           NODE                                       NOMINATED NODE   READINESS GATES
ovnkube-control-plane-65497d4548-9ptdr   2/2     Running   2 (128m ago)   147m   10.0.0.3     ci-ln-3njdr9b-72292-5nwkp-master-0         <none>           <none>
ovnkube-control-plane-65497d4548-j6zfk   2/2     Running   0              147m   10.0.0.5     ci-ln-3njdr9b-72292-5nwkp-master-2         <none>           <none>
ovnkube-node-5dx44                       8/8     Running   0              146m   10.0.0.3     ci-ln-3njdr9b-72292-5nwkp-master-0         <none>           <none>
ovnkube-node-dpfn4                       8/8     Running   0              146m   10.0.0.4     ci-ln-3njdr9b-72292-5nwkp-master-1         <none>           <none>
ovnkube-node-kwc9l                       8/8     Running   0              134m   10.0.128.2   ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj   <none>           <none>
ovnkube-node-mcrhl                       8/8     Running   0              134m   10.0.128.4   ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v   <none>           <none>
ovnkube-node-nsct4                       8/8     Running   0              146m   10.0.0.5     ci-ln-3njdr9b-72292-5nwkp-master-2         <none>           <none>
ovnkube-node-zrj9f                       8/8     Running   0              134m   10.0.128.3   ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7   <none>           <none>

Create a `ConfigMap` file similar to the following example and use a filename such as `env-overrides.yaml`:

Example `ConfigMap` file

kind: ConfigMap
apiVersion: v1
metadata:
  name: env-overrides
  namespace: openshift-ovn-kubernetes
data:
  ci-ln-3njdr9b-72292-5nwkp-master-0: |
    # This sets the log level for the ovn-kubernetes node process:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for ovn-controller:
    OVN_LOG_LEVEL=dbg
  ci-ln-3njdr9b-72292-5nwkp-master-2: |
    # This sets the log level for the ovn-kubernetes node process:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for ovn-controller:
    OVN_LOG_LEVEL=dbg
  _master: |
    # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters:
    OVN_LOG_LEVEL=dbg

Apply the `ConfigMap` file by using the following command:

$ oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml

Example output

configmap/env-overrides created

Restart the `ovnkube` pods to apply the new log level by using the following commands:

$ oc delete pod -n openshift-ovn-kubernetes \
    --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node

$ oc delete pod -n openshift-ovn-kubernetes \
    --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node

$ oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node

To verify that the `ConfigMap` file has been applied to all nodes for a specific pod, run the following command:
$ oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)'

where:
`<xxxx>`: Specifies the random sequence of letters for a pod from the previous step.
Example output
[pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423
Optional: Check that the `ConfigMap` file has been applied by running the following command:

for f in $(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo "---- $f ----" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller $f -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\s+\d' ; done
---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4
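The `grep -P -o '^.*loglevel\s+\d'` step above prints each process line up to and including its log level. To see only the level tokens, a narrower pattern works; the process lines below are invented sample data, not real `pgrep` output from a cluster.

```shell
# Sample ovnkube process command lines (illustrative only; node names invented).
cat > /tmp/ps_lines.txt <<'EOF'
60981 /usr/bin/ovnkube --init-ovnkube-controller nodeA --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4
130707 /usr/bin/ovnkube --init-ovnkube-controller nodeB --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 5
EOF

# Print only the "loglevel N" token from each line instead of the whole line.
grep -o 'loglevel [0-9]' /tmp/ps_lines.txt
```

This prints `loglevel 4` and then `loglevel 5`, making it easy to spot which nodes picked up the debug setting from the `ConfigMap`.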
24.3.6. Checking the OVN-Kubernetes pod network connectivity

The connectivity check controller, in OpenShift Container Platform 4.10 and later, orchestrates connection verification checks in your cluster. These include the Kubernetes API, the OpenShift API, and individual nodes. The results for the connection tests are stored in `PodNetworkConnectivityCheck` objects in the `openshift-network-diagnostics` namespace.
Prerequisites
- Access to the OpenShift CLI (`oc`).
- Access to the cluster as a user with the `cluster-admin` role.
- You have installed `jq`.
Procedure
To list the current `PodNetworkConnectivityCheck` objects, enter the following command:

$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics

View the most recent success for each connection object by using the following command:

$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'

View the most recent failures for each connection object by using the following command:

$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'

View the most recent outages for each connection object by using the following command:

$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
    -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'

The connectivity check controller also logs metrics from these checks into Prometheus.
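The jq filters above read index `[0]` of the `successes`, `failures`, and `outages` arrays. This sketch assumes, as those filters do, that entries are stored most-recent first, so "most recent" is simply the first element. The timestamps and entry text below are invented sample data.

```shell
# Sample success log, newest entry first (illustrative only).
cat > /tmp/successes.txt <<'EOF'
2024-01-03T10:00:00Z TCPConnectSuccess
2024-01-02T10:00:00Z TCPConnectSuccess
2024-01-01T10:00:00Z TCPConnectSuccess
EOF

# Taking element [0] of a newest-first list is "take the first line".
head -n 1 /tmp/successes.txt
```

This prints the 2024-01-03 entry, the same selection `.status.successes[0]` performs on the real object.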
View all the metrics by running the following command:
$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
    promtool query instant http://localhost:9090 \
    '{component="openshift-network-diagnostics"}'

View the latency between the source pod and the OpenShift API service for the last 5 minutes:

$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
    promtool query instant http://localhost:9090 \
    '{component="openshift-network-diagnostics"}'
24.4. OVN-Kubernetes network policy

The `AdminNetworkPolicy` resource is a Technology Preview feature only.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the `NetworkPolicy` API, which is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies.

The second feature is `AdminNetworkPolicy`, which consists of two APIs: the `AdminNetworkPolicy` (ANP) API and the `BaselineAdminNetworkPolicy` (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over `NetworkPolicy` objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users by using `NetworkPolicy` objects if needed.
The OVN-Kubernetes CNI in OpenShift Container Platform implements these network policies by using Access Control List (ACL) tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3.
Tier 1 evaluates `AdminNetworkPolicy` (ANP) objects. Tier 2 evaluates `NetworkPolicy` objects. Tier 3 evaluates `BaselineAdminNetworkPolicy` (BANP) objects.
Figure 24.3. OVN-Kubernetes Access Control List (ACL)
If traffic matches an ANP rule, the rules in that ANP are evaluated first. If the match is an ANP `allow` or `deny` rule, any existing `NetworkPolicy` and `BaselineAdminNetworkPolicy` (BANP) objects in the cluster are skipped from evaluation. If the match is an ANP `pass` rule, then evaluation moves from tier 1 of the ACL to tier 2, where `NetworkPolicy` policies are evaluated.
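The tiered evaluation described above can be sketched as a small decision function. This is an illustrative model only, not OVN-Kubernetes code: `None` stands for "no matching rule at this tier", and the function returns the first decisive verdict it reaches.

```shell
# Model of the three-tier ACL walk: ANP (tier 1), NetworkPolicy (tier 2),
# BANP (tier 3). An ANP Allow/Deny is final; Pass (or no ANP match) falls
# through to the next tier.
evaluate() {
  anp=$1; np=$2; banp=$3
  case "$anp" in
    Allow|Deny) echo "$anp (ANP, tier 1)"; return 0 ;;
  esac
  if [ "$np" != "None" ]; then
    echo "$np (NetworkPolicy, tier 2)"; return 0
  fi
  echo "$banp (BANP, tier 3)"
}

evaluate Deny Allow Allow   # ANP Deny is final at tier 1
evaluate Pass Allow Deny    # ANP Pass delegates to the NetworkPolicy
evaluate None None Deny     # nothing matched earlier, the BANP decides
```

The three calls print `Deny (ANP, tier 1)`, `Allow (NetworkPolicy, tier 2)`, and `Deny (BANP, tier 3)`, matching the precedence rules in the text: an ANP verdict short-circuits, a `Pass` hands the decision to tier 2, and the BANP is only the fallback.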
24.4.1. AdminNetworkPolicy

An `AdminNetworkPolicy` (ANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use ANPs to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies on a cluster-scoped level that are non-overridable by `NetworkPolicy` objects.

The key difference between `AdminNetworkPolicy` and `NetworkPolicy` objects is that the former is for administrators and is cluster scoped, while the latter is for tenant owners and is namespace scoped.
An ANP allows administrators to specify the following:
- A `priority` value that determines the order of its evaluation. The lower the value, the higher the precedence.
- A `subject` that consists of a set of namespaces or namespaces and pods.
- A list of ingress rules to be applied for all ingress traffic towards the `subject`.
- A list of egress rules to be applied for all egress traffic from the `subject`.
The `AdminNetworkPolicy` resource is available only when the `TechnologyPreviewNoUpgrade` feature set is enabled. Enabling the `TechnologyPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates.
24.4.1.1. AdminNetworkPolicy example
Example 24.1. Example YAML file for an ANP
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
name: sample-anp-deny-pass-rules
spec:
priority: 50
subject:
namespaces:
matchLabels:
kubernetes.io/metadata.name: example.name
ingress:
- name: "deny-all-ingress-tenant-1"
action: "Deny"
from:
- pods:
namespaces:
namespaceSelector:
matchLabels:
custom-anp: tenant-1
podSelector:
matchLabels:
custom-anp: tenant-1
egress:
- name: "pass-all-egress-to-tenant-1"
action: "Pass"
to:
- pods:
namespaces:
namespaceSelector:
matchLabels:
custom-anp: tenant-1
podSelector:
matchLabels:
custom-anp: tenant-1
1. Specify a name for your ANP.
2. The `spec.priority` field supports a maximum of 100 ANPs with priority values of 0-99 in a cluster. The lower the value, the higher the precedence. Creating `AdminNetworkPolicy` objects with the same priority creates a nondeterministic outcome.
3. Specify the namespace to apply the ANP resource to.
4. ANPs have both ingress and egress rules. ANP rules for the `spec.ingress` field accept values of `Pass`, `Deny`, and `Allow` for the `action` field.
5. Specify a name for the `ingress.name`.
6. Specify the namespaces to select the pods from to apply the ANP resource.
7. Specify the `podSelector.matchLabels` name of the pods to apply the ANP resource.
8. ANPs have both ingress and egress rules. ANP rules for the `spec.egress` field accept values of `Pass`, `Deny`, and `Allow` for the `action` field.
Additional resources
24.4.1.2. AdminNetworkPolicy actions for rules

As an administrator, you can set `Allow`, `Deny`, or `Pass` as the `action` field for your `AdminNetworkPolicy` rules.
24.4.1.2.1. AdminNetworkPolicy Allow example

The following ANP that is defined at priority 9 ensures all ingress traffic is allowed from the `monitoring` namespace towards any tenant (all other namespaces) in the cluster.
Example 24.2. Example YAML file for a strong Allow ANP
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
name: allow-monitoring
spec:
priority: 9
subject:
namespaces: {}
ingress:
- name: "allow-ingress-from-monitoring"
action: "Allow"
from:
- namespaces:
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
# ...
This is an example of a strong `Allow` ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored by using `NetworkPolicy` objects.
24.4.1.2.2. AdminNetworkPolicy Deny example

The following ANP that is defined at priority 5 ensures all ingress traffic from the `monitoring` namespace is blocked towards tenants with the `security: restricted` label.
Example 24.3. Example YAML file for a strong Deny ANP
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
name: block-monitoring
spec:
priority: 5
subject:
namespaces:
matchLabels:
security: restricted
ingress:
- name: "deny-ingress-from-monitoring"
action: "Deny"
from:
- namespaces:
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
# ...
This is a strong `Deny` ANP that is non-overridable by all the parties involved.

When combined with the strong `Allow` example, the `block-monitoring` ANP has a lower priority value, giving it higher precedence, which ensures restricted tenants are never monitored.
24.4.1.2.3. AdminNetworkPolicy Pass example

The following ANP that is defined at priority 7 ensures all ingress traffic from the `monitoring` namespace towards tenants with the `security: internal` label is passed on to tier 2 of the ACLs and evaluated by the namespaces' `NetworkPolicy` objects.
Example 24.4. Example YAML file for a strong Pass ANP
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
name: pass-monitoring
spec:
priority: 7
subject:
namespaces:
matchLabels:
security: internal
ingress:
- name: "pass-ingress-from-monitoring"
action: "Pass"
from:
- namespaces:
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
# ...
This example is a strong `Pass` action ANP because it delegates the decision to `NetworkPolicy` objects defined by tenant owners. This `pass-monitoring` ANP allows all tenant owners grouped at the `internal` security level to choose if their metrics should be scraped by the infrastructure's monitoring service by using namespace-scoped `NetworkPolicy` objects.
24.4.2. BaselineAdminNetworkPolicy

`BaselineAdminNetworkPolicy` (BANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use BANP to set up and enforce optional baseline network policy rules that are overridable by users by using `NetworkPolicy` objects if needed. Rule actions for BANP are `allow` or `deny`.

The `BaselineAdminNetworkPolicy` resource is a cluster singleton object that can be used as a guardrail policy when a passed traffic policy does not match any `NetworkPolicy` objects in the cluster. A BANP can also be used as a default security model that blocks intra-cluster traffic by default, so that a user needs to use `NetworkPolicy` objects to allow known traffic. You must use `default` as the name when creating a BANP resource.
A BANP allows administrators to specify:
- A `subject` that consists of a set of namespaces or namespaces and pods.
- A list of ingress rules to be applied for all ingress traffic towards the `subject`.
- A list of egress rules to be applied for all egress traffic from the `subject`.
The `BaselineAdminNetworkPolicy` resource is available only when the `TechnologyPreviewNoUpgrade` feature set is enabled.
24.4.2.1. BaselineAdminNetworkPolicy example
Example 24.5. Example YAML file for BANP
apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
name: default
spec:
subject:
namespaces:
matchLabels:
kubernetes.io/metadata.name: example.name
ingress:
- name: "deny-all-ingress-from-tenant-1"
action: "Deny"
from:
- pods:
namespaces:
namespaceSelector:
matchLabels:
custom-banp: tenant-1
podSelector:
matchLabels:
custom-banp: tenant-1
egress:
- name: "allow-all-egress-to-tenant-1"
action: "Allow"
to:
- pods:
namespaces:
namespaceSelector:
matchLabels:
custom-banp: tenant-1
podSelector:
matchLabels:
custom-banp: tenant-1
1. The policy name must be `default` because BANP is a singleton object.
2. Specify the namespace to apply the BANP resource to.
3. BANPs have both ingress and egress rules. BANP rules for the `spec.ingress` and `spec.egress` fields accept values of `Deny` and `Allow` for the `action` field.
4. Specify a name for the `ingress.name`.
5. Specify the namespaces to select the pods from to apply the BANP resource.
6. Specify the `podSelector.matchLabels` name of the pods to apply the BANP resource.
24.4.2.2. BaselineAdminNetworkPolicy Deny example

The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at the `internal` security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the `pass-monitoring` ANP.
Example 24.6. Example YAML file for a guardrail Deny rule
apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
name: default
spec:
subject:
namespaces:
matchLabels:
security: internal
ingress:
- name: "deny-ingress-from-monitoring"
action: "Deny"
from:
- namespaces:
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
# ...
You can use an `AdminNetworkPolicy` resource with a `Pass` value for the `action` field in conjunction with the `BaselineAdminNetworkPolicy` resource to create a multi-tenant policy.

As an administrator, if you apply both the "AdminNetworkPolicy Pass example" and the "BaselineAdminNetworkPolicy Deny example", tenants are then left with the ability to create a `NetworkPolicy` resource that will be evaluated before the BANP.

For example, Tenant 1 can set up the following `NetworkPolicy` resource to monitor ingress traffic:
Example 24.7. Example NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-monitoring
  namespace: tenant-1
spec:
podSelector:
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
# ...
In this scenario, Tenant 1’s policy would be evaluated after the "AdminNetworkPolicy Pass example" and before the "BaselineAdminNetworkPolicy Deny example", which denies all ingress monitoring traffic coming into tenants with the `security: internal` label. With Tenant 1’s `NetworkPolicy` object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any `NetworkPolicy` objects in place, will not be able to collect data. As an administrator, you have not monitored internal tenants by default, but instead you created a BANP that allows tenants to use `NetworkPolicy` objects to override the default deny behavior of your BANP.
24.5. Tracing OpenFlow with ovnkube-trace

OVN and OVS traffic flows can be simulated in a single utility called `ovnkube-trace`. The `ovnkube-trace` utility runs `ovn-trace`, `ovs-appctl ofproto/trace`, and `ovn-detrace` and correlates that information in a single output.

You can execute the `ovnkube-trace` binary from a dedicated container. For releases after OpenShift Container Platform 4.7, you can also copy the binary to a local host and execute it from that host.
24.5.1. Installing ovnkube-trace on a local host

The `ovnkube-trace` tool traces packet simulations for arbitrary UDP or TCP traffic between points in an OVN-Kubernetes driven OpenShift Container Platform cluster. Copy the `ovnkube-trace` binary to your local host to make it available to run against the cluster.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster as a user with `cluster-admin` privileges.
Procedure
Create a pod variable by using the following command:
$ POD=$(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print $NF}')

Run the following command on your local host to copy the binary from the `ovnkube-control-plane` pods:

$ oc cp -n openshift-ovn-kubernetes $POD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace

Note: If you are using Red Hat Enterprise Linux (RHEL) 8 to run the `ovnkube-trace` tool, you must copy the file `/usr/lib/rhel8/ovnkube-trace` to your local host.

Make `ovnkube-trace` executable by running the following command:

$ chmod +x ovnkube-trace

Display the options available with `ovnkube-trace` by running the following command:

$ ./ovnkube-trace -help

Expected output
Usage of ./ovnkube-trace:
  -addr-family string
        Address family (ip4 or ip6) to be used for tracing (default "ip4")
  -dst string
        dest: destination pod name
  -dst-ip string
        destination IP address (meant for tests to external targets)
  -dst-namespace string
        k8s namespace of dest pod (default "default")
  -dst-port string
        dst-port: destination port (default "80")
  -kubeconfig string
        absolute path to the kubeconfig file
  -loglevel string
        loglevel: klog level (default "0")
  -ovn-config-namespace string
        namespace used by ovn-config itself
  -service string
        service: destination service name
  -skip-detrace
        skip ovn-detrace command
  -src string
        src: source pod name
  -src-namespace string
        k8s namespace of source pod (default "default")
  -tcp
        use tcp transport protocol
  -udp
        use udp transport protocol

The command-line arguments supported are familiar Kubernetes constructs, such as namespaces, pods, and services, so you do not need to find the MAC address, the IP address of the destination nodes, or the ICMP type.
The log levels are:
- 0 (minimal output)
- 2 (more verbose output showing results of trace commands)
- 5 (debug output)
24.5.2. Running ovnkube-trace

Run `ovn-trace` to simulate packet travel between endpoints in the cluster.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You are logged in to the cluster as a user with `cluster-admin` privileges.
- You have installed `ovnkube-trace` on your local host.
Example: Testing that DNS resolution works from a deployed pod
This example illustrates how to test the DNS resolution from a deployed pod to the core DNS pod that runs in the cluster.
Procedure
Start a web service in the default namespace by entering the following command:
$ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80

List the pods running in the `openshift-dns` namespace:

$ oc get pods -n openshift-dns

Example output
NAME                  READY   STATUS    RESTARTS   AGE
dns-default-8s42x     2/2     Running   0          5h8m
dns-default-mdw6r     2/2     Running   0          4h58m
dns-default-p8t5h     2/2     Running   0          4h58m
dns-default-rl6nk     2/2     Running   0          5h8m
dns-default-xbgqx     2/2     Running   0          5h8m
dns-default-zv8f6     2/2     Running   0          4h58m
node-resolver-62jjb   1/1     Running   0          5h8m
node-resolver-8z4cj   1/1     Running   0          4h59m
node-resolver-bq244   1/1     Running   0          5h8m
node-resolver-hc58n   1/1     Running   0          4h59m
node-resolver-lm6z4   1/1     Running   0          5h8m
node-resolver-zfx5k   1/1     Running   0          5h

Run the following `ovnkube-trace` command to verify DNS resolution is working:

$ ./ovnkube-trace \
  -src-namespace default \
  -src web \
  -dst-namespace openshift-dns \
  -dst dns-default-p8t5h \
  -udp -dst-port 53 \
  -loglevel 0

Example output if the `src` and `dst` pods land on the same node:

ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h
ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web
ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h
ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web
ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h
ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web

Example output if the `src` and `dst` pods land on different nodes:

ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x
ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x
ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web
ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web
ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x
ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web
ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x
ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web

The output indicates success from the deployed pod to the DNS port and also indicates that traffic is successful going back in the other direction. So you know bi-directional traffic is supported on UDP port 53 if the web pod wants to do DNS resolution from core DNS.
If, for example, that did not work and you wanted to get the `ovn-trace`, the `ovs-appctl ofproto/trace`, and the `ovn-detrace` debug information, increase the log level to 2 and run the command again as follows:
$ ./ovnkube-trace \
-src-namespace default \
-src web \
-dst-namespace openshift-dns \
-dst dns-default-467qw \
-udp -dst-port 53 \
-loglevel 2
The output from this increased log level is too much to list here. In a failure situation, the output of this command shows which flow is dropping the traffic. For example, an egress or ingress network policy that does not allow that traffic may be configured on the cluster.
Example: Verifying by using debug output a configured default deny
This example illustrates how to use the debug output to identify that an ingress default deny policy blocks traffic.
Procedure
Create the following YAML that defines a `deny-by-default` policy to deny ingress from all pods in all namespaces. Save the YAML in the `deny-by-default.yaml` file:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: default
spec:
  podSelector: {}
  ingress: []
$ oc apply -f deny-by-default.yaml

Example output
networkpolicy.networking.k8s.io/deny-by-default created

Start a web service in the `default` namespace by entering the following command:

$ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80

Run the following command to create the `prod` namespace:

$ oc create namespace prod

Run the following command to label the `prod` namespace:

$ oc label namespace/prod purpose=production

Run the following command to deploy an `alpine` image in the `prod` namespace and start a shell:

$ oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh
In this new terminal session, run ovn-trace to verify the failure in communication between the source pod test-6459 running in namespace prod and the destination pod running in the default namespace:

$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 0

Example output

ovn-trace source pod to destination pod indicates failure from test-6459 to web

Increase the log level to 2 to expose the reason for the failure by running the following command:

$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 2

Example output
...
------------------------------------------------
 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d
    reg8[30..31] = 1;
    next(4);
 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89
    reg8[30..31] = 2;
    next(4);
 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab
    reg8[17] = 1;
    ct_commit { ct_mark.blocked = 1; }; [1]
    next;
...

[1] Ingress traffic is blocked due to the default deny policy being in place.
Create a policy that allows traffic from all pods in namespaces with the label purpose=production. Save the YAML in the web-allow-prod.yaml file:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production

Apply the policy by entering the following command:

$ oc apply -f web-allow-prod.yaml

Run ovnkube-trace to verify that traffic is now allowed by entering the following command:

$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 0

Expected output
ovn-trace source pod to destination pod indicates success from test-6459 to web
ovn-trace destination pod to source pod indicates success from web to test-6459
ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web
ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459
ovn-detrace source pod to destination pod indicates success from test-6459 to web
ovn-detrace destination pod to source pod indicates success from web to test-6459

Run the following command in the shell that was opened in step six to connect nginx to the web server:

wget -qO- --timeout=2 http://web.default

Expected output

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
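When you run these checks repeatedly, it can help to scan saved ovnkube-trace output automatically. The following is a hypothetical helper, not part of the ovnkube-trace tooling, that assumes the "indicates success"/"indicates failure" phrasing shown in the example outputs above:

```shell
#!/bin/bash
# Hypothetical helper: scan a file of saved ovnkube-trace output and fail
# if any trace leg reports "indicates failure".
all_traces_ok() {
  local file=$1 failures
  failures=$(grep -c "indicates failure" "$file")
  if [ "$failures" -gt 0 ]; then
    echo "FAIL: $failures trace step(s) failed"
    return 1
  fi
  echo "PASS: all trace steps succeeded"
}
```

You could pipe `./ovnkube-trace ... | tee trace.log` and then call `all_traces_ok trace.log` from a CI job or pre-migration checklist.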
24.6. Migrating from the OpenShift SDN network plugin
As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin.
You can use the offline migration method for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin. The offline migration method is a manual process that includes some downtime.
24.6.1. Migration to the OVN-Kubernetes network plugin
Migrating to the OVN-Kubernetes network plugin is a manual process that includes some downtime during which your cluster is unreachable.
Before you migrate your OpenShift Container Platform cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster.
Although a rollback procedure is provided, the migration is intended to be a one-way process.
A migration to the OVN-Kubernetes network plugin is supported on the following platforms:
- Bare metal hardware
- Amazon Web Services (AWS)
- Google Cloud
- IBM Cloud®
- Microsoft Azure
- Red Hat OpenStack Platform (RHOSP)
- VMware vSphere
Migrating to or from the OVN-Kubernetes network plugin is not supported for managed OpenShift cloud services such as Red Hat OpenShift Dedicated, Azure Red Hat OpenShift (ARO), and Red Hat OpenShift Service on AWS (ROSA).

Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin is not supported on Nutanix.
24.6.1.1. Considerations for migrating to the OVN-Kubernetes network plugin
If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin.
The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration.
While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same.
If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin:
- Namespace isolation
- Egress router pods
- Before migrating to OVN-Kubernetes, ensure that the following IP address ranges are not in use: 100.64.0.0/16, 169.254.169.0/29, 100.88.0.0/16, fd98::/64, fd97::/64, and fd69::/125. OVN-Kubernetes uses these ranges internally. Do not include any of these ranges in any other CIDR definitions in your cluster or infrastructure.
- If your cluster with Precision Time Protocol (PTP) uses the User Datagram Protocol (UDP) for hardware time stamping and you migrate to the OVN-Kubernetes plugin, the hardware time stamping cannot be applied to primary interface devices, such as an Open vSwitch (OVS) bridge. As a result, UDP version 4 configurations cannot work with a br-ex interface.
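As a quick pre-migration sanity check for the reserved ranges listed above, you can test a candidate CIDR with plain bash integer arithmetic. This is a hedged sketch covering the IPv4 ranges only; the fd98::/64, fd97::/64, and fd69::/125 IPv6 ranges would need separate handling:

```shell
#!/bin/bash
# Sketch: check that a proposed IPv4 CIDR does not overlap the IPv4 ranges
# that OVN-Kubernetes reserves internally.
ip2int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

overlaps() {  # overlaps CIDR1 CIDR2 -> returns 0 if they share any address
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local s1 s2 m
  s1=$(ip2int "$n1"); s2=$(ip2int "$n2")
  m=$(( p1 < p2 ? p1 : p2 ))                # compare on the shorter prefix
  [ $(( s1 >> (32 - m) )) -eq $(( s2 >> (32 - m) )) ]
}

check_cidr() {
  local cidr=$1 reserved
  for reserved in 100.64.0.0/16 169.254.169.0/29 100.88.0.0/16; do
    if overlaps "$cidr" "$reserved"; then
      echo "$cidr overlaps reserved range $reserved"
      return 1
    fi
  done
  echo "$cidr is clear of the reserved IPv4 ranges"
}
```

For example, `check_cidr 10.128.0.0/14` reports the range as clear, while `check_cidr 100.64.1.0/24` reports an overlap with 100.64.0.0/16.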
The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plugins.
24.6.1.1.1. Primary network interface
The OpenShift SDN plugin allows application of the Kubernetes NMState NodeNetworkConfigurationPolicy (NNCP) custom resource to the primary network interface on a node.

If you have an NNCP applied to the primary interface, you must delete the NNCP before migrating to the OVN-Kubernetes network plugin. Deleting the NNCP does not remove the configuration from the primary interface, but with OVN-Kubernetes, the Kubernetes NMState cannot manage this configuration. Instead, the configure-ovs.sh shell script manages the primary interface and the configuration attached to this interface.
24.6.1.1.2. Namespace isolation
OVN-Kubernetes supports only the network policy isolation mode.
For a cluster using OpenShift SDN that is configured in either the multitenant or subnet isolation mode, you can still migrate to the OVN-Kubernetes network plugin. Note that after the migration operation, multitenant isolation mode is dropped, so you must manually configure network policies to achieve the same level of project-level isolation for pods and services.
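For example, one common starting point for approximating the project-level isolation that multitenant mode provided is an allow-from-same-namespace policy applied in each project, typically paired with a default-deny policy. The manifest below is an illustrative sketch written as a shell heredoc; the policy name is not mandated by either plugin:

```shell
# Write an allow-from-same-namespace NetworkPolicy manifest. An empty
# podSelector selects every pod in whichever namespace the policy is applied to.
cat << 'EOF' > allow-same-namespace.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF
echo "wrote allow-same-namespace.yaml"
```

You would then apply it per project, for example with `oc apply -f allow-same-namespace.yaml -n <project>`.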
24.6.1.1.3. Egress IP addresses
OpenShift SDN supports two different Egress IP modes:
- In the automatically assigned approach, an egress IP address range is assigned to a node.
- In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node.
The migration process supports migrating Egress IP configurations that use the automatically assigned mode.
The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table:
| OVN-Kubernetes | OpenShift SDN |
|---|---|
|
|
For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address".
24.6.1.1.4. Egress network policies
The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table:
| OVN-Kubernetes | OpenShift SDN |
|---|---|
|
|
Because the name of an EgressFirewall object can only be set to default, after the migration all migrated EgressNetworkPolicy objects are named default, regardless of what their name was under OpenShift SDN.

If you subsequently roll back to OpenShift SDN, all EgressNetworkPolicy objects are named default as the prior name is lost.
For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project".
24.6.1.1.5. Egress router pods
OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode.
When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod.
24.6.1.1.6. Multicast
The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table:
| OVN-Kubernetes | OpenShift SDN |
|---|---|
|
|
For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project".
24.6.1.1.7. Network policies
OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group.
24.6.1.2. How the migration process works
The following table summarizes the migration process by listing the user-initiated steps in the process and the actions that the migration performs in response.
| User-initiated steps | Migration activity |
|---|---|
| Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster. | The Machine Config Operator (MCO) applies new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. |
| Update the networkType field of the Network.config.openshift.io CR. | The Cluster Network Operator (CNO) takes down the OpenShift SDN control plane pods and deploys the OVN-Kubernetes control plane pods. |
| Reboot each node in the cluster. | The nodes start with the OVN-Kubernetes network plugin. |
If a rollback to OpenShift SDN is required, the following table describes the process.
You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback.
| User-initiated steps | Migration activity |
|---|---|
| Suspend the MCO to ensure that it does not interrupt the migration. | The MCO stops. |
| Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN. | The MCO prepares the machine configs that the OpenShift SDN network plugin requires. |
| Update the networkType field of the Network.config.openshift.io CR. | The Cluster Network Operator (CNO) takes down the OVN-Kubernetes control plane pods and deploys the OpenShift SDN control plane pods. |
| Reboot each node in the cluster. | The nodes start with the OpenShift SDN network plugin. |
| Enable the MCO after all nodes in the cluster reboot. | The MCO applies any pending machine configs. |
24.6.1.3. Using an Ansible playbook to migrate to the OVN-Kubernetes network plugin
As a cluster administrator, you can use the network.offline_migration_sdn_to_ovnk Ansible collection to migrate your cluster from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. The collection includes the following playbooks:

- playbooks/playbook-migration.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the migration process.
- playbooks/playbook-rollback.yml: Includes playbooks that execute in a sequence where each playbook represents a step in the rollback process.
Prerequisites

- You installed the python3 package, minimum version 3.10.
- You installed the jmespath and jq packages.
- You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
- You checked if your cluster uses static routes or routing policies in the host network. If true, a later procedure step requires that you set the routingViaHost parameter to true and the ipForwarding parameter to Global in the gatewayConfig section of the playbooks/playbook-migration.yml file.
- If the OpenShift SDN plugin uses the 100.64.0.0/16 and 100.88.0.0/16 address ranges, you patched the address ranges. For more information, see "Patching OVN-Kubernetes address ranges" in the Additional resources section.
Procedure
Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):

$ sudo dnf install -y ansible-core

Create an ansible.cfg file and add information similar to the following example to the file. Ensure that the file exists in the same directory as where the ansible-galaxy commands and the playbooks run.

$ cat << EOF >> ansible.cfg
[galaxy]
server_list = automation_hub, validated

[galaxy_server.automation_hub]
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=

#[galaxy_server.release_galaxy]
#url=https://galaxy.ansible.com/

[galaxy_server.validated]
url=https://console.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=
EOF

From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
- In the Offline token section of the page, click the Load token button.
- After the token loads, click the Copy to clipboard icon.
- Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:

$ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk

Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:

$ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk

Example output

network.offline_migration_sdn_to_ovnk 1.0.2

The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.

Configure migration features in the playbooks/playbook-migration.yml file:

# ...
migration_interface_name: eth0
migration_disable_auto_migration: true
migration_egress_ip: false
migration_egress_firewall: false
migration_multicast: false
migration_routing_via_host: true
migration_ip_forwarding: Global
migration_cidr: "10.240.0.0/14"
migration_prefix: 23
migration_mtu: 1400
migration_geneve_port: 6081
migration_ipv4_subnet: "100.64.0.0/16"
# ...

migration_interface_name
If you use a NodeNetworkConfigurationPolicy (NNCP) resource on a primary interface, specify the interface name in the playbooks/playbook-migration.yml file so that the NNCP resource gets deleted on the primary interface during the migration process.

migration_disable_auto_migration
Disables the auto-migration of OpenShift SDN CNI plugin features to the OVN-Kubernetes plugin. If you disable auto-migration of features, you must also set the migration_egress_ip, migration_egress_firewall, and migration_multicast parameters to false. If you need to enable auto-migration of features, set the migration_disable_auto_migration parameter to false.

migration_routing_via_host
Set to true to configure local gateway mode or false to configure shared gateway mode for nodes in your cluster. The default value is false. In local gateway mode, traffic is routed through the host network stack. In shared gateway mode, traffic is not routed through the host network stack.

migration_ip_forwarding
If you configured local gateway mode, set IP forwarding to Global if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes.

migration_cidr
Specifies a Classless Inter-Domain Routing (CIDR) IP address block for your cluster. You cannot use any CIDR block that overlaps the 100.64.0.0/16 CIDR block, because the OVN-Kubernetes network provider uses this block internally.

migration_prefix
Ensure that you specify a prefix value, which is the slice of the CIDR block apportioned to each node in your cluster.

migration_mtu
Optional parameter that sets a specific maximum transmission unit (MTU) for your cluster network after the migration process.

migration_geneve_port
Optional parameter that sets a Geneve port for OVN-Kubernetes. The default port is 6081.

migration_ipv4_subnet
Optional parameter that sets an IPv4 address range for internal use by OVN-Kubernetes. The default value for the parameter is 100.64.0.0/16.
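The constraint described above, that disabling auto-migration requires the three feature flags to be false, can be checked mechanically before you run the playbook. This is a hypothetical pre-flight snippet, not part of the collection; the variable names mirror the playbook parameters:

```shell
#!/bin/bash
# Sanity-check the playbook feature flags: when auto-migration is disabled,
# the egress IP, egress firewall, and multicast flags must all be false.
check_migration_vars() {
  local disable=$1 egress_ip=$2 egress_fw=$3 multicast=$4
  local flag
  if [ "$disable" = "true" ]; then
    for flag in "$egress_ip" "$egress_fw" "$multicast"; do
      if [ "$flag" != "false" ]; then
        echo "error: migration_disable_auto_migration=true requires the feature flags to be false"
        return 1
      fi
    done
  fi
  echo "flags are consistent"
}
```

For example, `check_migration_vars true false false false` passes, while `check_migration_vars true true false false` reports the inconsistency.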
To run the playbooks/playbook-migration.yml file, enter the following command:

$ ansible-playbook -v playbooks/playbook-migration.yml
24.6.2. Migrating to the OVN-Kubernetes network plugin
As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.
Prerequisites
- You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode.
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have a recent backup of the etcd database.
- You can manually reboot each node.
- You checked that your cluster is in a known good state without any errors.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms.
- You removed webhooks. Alternatively, you can set a timeout value for each webhook, which is detailed in the procedure. If you did not complete one of these tasks, your cluster might fail to schedule pods.
Procedure
If you did not remove webhooks, set the timeout value for each webhook to 3 seconds by creating a ValidatingWebhookConfiguration custom resource and then specifying the timeout value for the timeoutSeconds parameter:

$ oc patch ValidatingWebhookConfiguration <webhook_name> --type='json' \
    -p '[{"op": "replace", "path": "/webhooks/0/timeoutSeconds", "value": 3}]'

where <webhook_name> is the name of your webhook.
To back up the configuration for the cluster network, enter the following command:

$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml

Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command:

#!/bin/bash
if [ -n "$OVN_SDN_MIGRATION_TIMEOUT" ] && [ "$OVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then
  unset OVN_SDN_MIGRATION_TIMEOUT
fi

#loops the timeout command of the script to repeatedly check the cluster Operators until all are available.
co_timeout=${OVN_SDN_MIGRATION_TIMEOUT:-1200s}

timeout "$co_timeout" bash <<EOT
until
  oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \
  oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \
  oc wait co --all --for='condition=DEGRADED=False' --timeout=10s;
do
  sleep 10
  echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False";
done
EOT

Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{"spec":{"migration":null}}'

Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps:

Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command:

$ oc get nncp

Example output

NAME          STATUS      REASON
bondmaster0   Available   SuccessfullyConfigured

Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path.

Remove the NNCP from your cluster:

$ oc delete nncp <nncp_manifest_filename>
To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }'

Note

This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment.

Check that the reboot is finished by running the following command:
$ oc get mcp

Check that all cluster Operators are available by running the following command:

$ oc get co

Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents:

- Egress IPs
- Egress firewall
- Multicast

To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{
      "spec": {
        "migration": {
          "networkType": "OVNKubernetes",
          "features": {
            "egressIP": <bool>,
            "egressFirewall": <bool>,
            "multicast": <bool>
          }
        }
      }
    }'

where <bool> specifies whether to enable migration of the feature. The default is true.
Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements:
Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step:
- If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored.
- If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step.
This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step.
Note

OpenShift SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page.
- Geneve (Generic Network Virtualization Encapsulation) overlay network port
- OVN-Kubernetes IPv4 internal subnet
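As a rough aid for the MTU decision above: per the "MTU value selection" guidance, OVN-Kubernetes adds 100 bytes of Geneve overlay overhead, while OpenShift SDN's VXLAN overhead is 50 bytes. A small sketch of the arithmetic, assuming those overhead values:

```shell
#!/bin/bash
# Cluster network MTU derived from the hardware (machine) MTU.
# Assumed overheads: 100 bytes for OVN-Kubernetes Geneve, 50 for SDN VXLAN.
ovnk_mtu() { echo $(( $1 - 100 )); }
sdn_mtu()  { echo $(( $1 - 50 )); }

# Example: with a standard 1500-byte hardware MTU, OVN-Kubernetes uses 1400.
ovnk_mtu 1500
```

This also shows why a custom SDN MTU usually has to drop by a further 50 bytes after migration: for the same hardware MTU, the OVN-Kubernetes value is 50 bytes smaller than the SDN value.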
To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch.
$ oc patch Network.operator.openshift.io cluster --type=merge \
    --patch '{
      "spec":{
        "defaultNetwork":{
          "ovnKubernetesConfig":{
            "mtu":<mtu>,
            "genevePort":<port>,
            "v4InternalSubnet":"<ipv4_subnet>"
          }}}}'

where:
mtu
The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.

port
The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789.

ipv4_subnet
An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16.
Example patch command to update the mtu field:

$ oc patch Network.operator.openshift.io cluster --type=merge \
    --patch '{
      "spec":{
        "defaultNetwork":{
          "ovnKubernetesConfig":{
            "mtu":1200
          }}}}'

As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get mcp

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Note

By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"

Example output

kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done

Verify that the following statements are true:

- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:

$ oc get machineconfig <config_name> -o yaml | grep ExecStart

where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes

If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.

To list the pods, enter the following command:
$ oc get pod -n openshift-machine-config-operator

Example output

NAME                                         READY   STATUS    RESTARTS   AGE
machine-config-controller-75f756f89d-sjp8b   1/1     Running   0          37m
machine-config-daemon-5cf4b                  2/2     Running   0          43h
machine-config-daemon-7wzcd                  2/2     Running   0          43h
machine-config-daemon-fc946                  2/2     Running   0          43h
machine-config-daemon-g2v28                  2/2     Running   0          43h
machine-config-daemon-gcl4f                  2/2     Running   0          43h
machine-config-daemon-l5tnv                  2/2     Running   0          43h
machine-config-operator-79d9c55d5-hth92      1/1     Running   0          37m
machine-config-server-bsc8h                  1/1     Running   0          43h
machine-config-server-hklrm                  1/1     Running   0          43h
machine-config-server-k9rtx                  1/1     Running   0          43h

The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five character alphanumeric sequence.

Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:
$ oc logs <pod> -n openshift-machine-config-operator

where <pod> is the name of a machine config daemon pod.

- Resolve any errors in the logs shown by the output from the previous command.
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:
To specify the network provider without changing the cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'

To specify a different cluster network IP address block, enter the following command:

$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{
      "spec": {
        "clusterNetwork": [
          {
            "cidr": "<cidr>",
            "hostPrefix": <prefix>
          }
        ],
        "networkType": "OVNKubernetes"
      }
    }'

where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.

Important

You cannot change the service network address block during the migration.
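The relationship between cidr and hostPrefix can be sanity-checked with shell arithmetic: each node receives one /hostPrefix subnet carved out of the cluster cidr. A sketch:

```shell
#!/bin/bash
# Number of node subnets available in the cluster network, and usable pod
# addresses per node, given the cluster CIDR prefix and the hostPrefix.
node_subnets()     { echo $(( 1 << ($2 - $1) )); }      # node_subnets CIDR_PREFIX HOST_PREFIX
pod_ips_per_node() { echo $(( (1 << (32 - $1)) - 2 )); } # minus network/broadcast addresses

# Example: a 10.128.0.0/14 cidr with hostPrefix 23 yields 512 node subnets,
# each with 510 usable pod addresses.
node_subnets 14 23
pod_ips_per_node 23
```

If the first number is smaller than the node count you expect the cluster to grow to, choose a larger cidr or a larger hostPrefix gap before starting the migration.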
Verify that the Multus daemon set rollout is complete before continuing with subsequent steps:
$ oc -n openshift-multus rollout status daemonset/multus

The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.

Example output

Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out

To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches:
Important

The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes.
Cluster Operators will not work correctly before you reboot the nodes.
With the oc rsh command, you can use a bash script similar to the following:

#!/bin/bash
readarray -t POD_NODES <<< "$(oc get pod -n openshift-machine-config-operator -o wide | grep daemon | awk '{print $1" "$7}')"

for i in "${POD_NODES[@]}"
do
  read -r POD NODE <<< "$i"
  until oc rsh -n openshift-machine-config-operator "$POD" chroot /rootfs shutdown -r +1
  do
    echo "cannot reboot node $NODE, retry" && sleep 3
  done
done

With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password.

#!/bin/bash

for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
  echo "reboot node $ip"
  ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
Confirm that the migration succeeded:
To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes.

$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'

To confirm that the cluster nodes are in the Ready state, enter the following command:

$ oc get nodes

To confirm that your pods are not in an error state, enter the following command:

$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'

If pods on a node are in an error state, reboot that node.
To confirm that all of the cluster Operators are not in an abnormal state, enter the following command:
$ oc get co

The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration configuration from the CNO configuration object, enter the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "migration": null } }'

To remove custom configuration for the OpenShift SDN network provider, enter the following command:

$ oc patch Network.operator.openshift.io cluster --type='merge' \
    --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }'

To remove the OpenShift SDN network provider namespace, enter the following command:

$ oc delete namespace openshift-sdn

After a successful migration operation, remove the network.openshift.io/network-type-migration- annotation from the network.config custom resource by entering the following command:

$ oc annotate network.config cluster network.openshift.io/network-type-migration-
Next steps
- Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking".
24.6.4. Understanding changes to external IP behavior in OVN-Kubernetes
When migrating from OpenShift SDN to OVN-Kubernetes (OVN-K), services that use external IPs might become inaccessible across namespaces due to network policy enforcement.
In OpenShift SDN, external IPs were accessible across namespaces by default. However, in OVN-K, network policies strictly enforce multitenant isolation, preventing access to services exposed via external IPs from other namespaces.
To ensure access, consider the following alternatives:
- Use an ingress or route: Instead of exposing services by using external IPs, configure an ingress or route to allow external access while maintaining security controls.
- Adjust the NetworkPolicy custom resource (CR): Modify a NetworkPolicy CR to explicitly allow access from the required namespaces, and ensure that traffic is allowed to the designated service ports. Without explicitly allowing traffic to the required ports, access might still be blocked, even if the namespace is allowed.
- Use a LoadBalancer service: If applicable, deploy a LoadBalancer service instead of relying on external IPs. For more information about configuring LoadBalancer services, see "NetworkPolicy and external IPs in OVN-Kubernetes".
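The NetworkPolicy adjustment can be sketched as follows. This is a minimal hypothetical manifest, not taken from this document: the namespace names (web, clients) and port 8080 are assumptions you would replace with your own values, and the apply step is shown only as a comment:

```shell
# Build a NetworkPolicy that admits ingress from the hypothetical "clients" namespace
# to pods in the hypothetical "web" namespace on TCP port 8080.
policy=$(cat <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-clients
  namespace: web
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: clients
    ports:
    - protocol: TCP
      port: 8080
EOF
)
echo "$policy"
# On a live cluster you would apply the manifest with:
#   echo "$policy" | oc apply -f -
```

Note that both the namespaceSelector and the ports entry are required: allowing the namespace without the port still blocks traffic, as described above.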
24.7. Rolling back to the OpenShift SDN network provider
As a cluster administrator, you can roll back to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin only after the migration to the OVN-Kubernetes network plugin is completed and successful.
24.7.1. Migrating to the OpenShift SDN network plugin
Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable.
You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- A recent backup of the etcd database is available.
- A reboot can be triggered manually for each node.
- The cluster is in a known good state, without any errors.
Procedure
Stop all of the machine configuration pools managed by the Machine Config Operator (MCO):
Stop the master configuration pool by entering the following command in your CLI:
$ oc patch MachineConfigPool master --type='merge' --patch \
  '{ "spec": { "paused": true } }'
Stop the worker machine configuration pool by entering the following command in your CLI:
$ oc patch MachineConfigPool worker --type='merge' --patch \
  '{ "spec":{ "paused": true } }'
To prepare for the migration, set the migration field to null by entering the following command in your CLI:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "migration": null } }'
Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation.
$ oc get Network.config cluster -o jsonpath='{.status.migration}'
Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }'
Important
If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters a degraded state, and this causes a slight delay until the CNO recovers from the degraded state.
Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI:
$ oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'
Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI:
$ oc patch Network.config.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "networkType": "OpenShiftSDN" } }'
Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents:
- Egress IPs
- Egress firewall
- Multicast
To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }'
where:
<bool>: Specifies whether to enable migration of the feature. The default is true.
Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements:
- Maximum transmission unit (MTU)
- VXLAN port
To customize either or both of the previously noted settings, customize the following command and enter it in your CLI. If you do not need to change a default value, omit the key from the patch.
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}'
mtu
- The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value.
port
- The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789. The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081.
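The MTU rule above can be sketched with shell arithmetic. The node MTU values here are hypothetical examples, not taken from this document:

```shell
# Hypothetical per-node MTU values; in practice you would gather these from your nodes.
node_mtus="9000 1500 1500"

# Find the smallest node MTU.
smallest=$(echo "$node_mtus" | tr ' ' '\n' | sort -n | head -n 1)

# The VXLAN overlay MTU must be set to 50 less than the smallest node MTU.
sdn_mtu=$((smallest - 50))
echo "$sdn_mtu"
```

The resulting value is what you would pass as the mtu key in the patch command above.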
Example patch command
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}'
Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches:
With the oc rsh command, you can use a bash script similar to the following:
#!/bin/bash

readarray -t POD_NODES <<< "$(oc get pod -n openshift-machine-config-operator -o wide | grep daemon | awk '{print $1" "$7}')"

for i in "${POD_NODES[@]}"
do
  read -r POD NODE <<< "$i"
  until oc rsh -n openshift-machine-config-operator "$POD" chroot /rootfs shutdown -r +1
  do
    echo "cannot reboot node $NODE, retry" && sleep 3
  done
done
With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password.
#!/bin/bash

for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
  echo "reboot node $ip"
  ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status:
$ oc -n openshift-multus rollout status daemonset/multus
The name of the Multus pods is in the form of multus-<xxxxx>, where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.
Example output
Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out
After the nodes in your cluster have rebooted and the Multus pods are rolled out, start all of the machine configuration pools by running the following commands:
Start the master configuration pool:
$ oc patch MachineConfigPool master --type='merge' --patch \
  '{ "spec": { "paused": false } }'
Start the worker configuration pool:
$ oc patch MachineConfigPool worker --type='merge' --patch \
  '{ "spec": { "paused": false } }'
As the MCO updates machines in each config pool, it reboots each node.
By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI:
$ oc describe node | egrep "hostname|machineconfig"
Example output
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Verify that the following statements are true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command in your CLI:
$ oc get machineconfig <config_name> -o yaml
where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
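The two node checks above can be sketched against a sample of oc describe node output; the rendered config hash shown is illustrative:

```shell
# Sample annotations in the format printed by `oc describe node`;
# the rendered config hash is illustrative.
describe_output='kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d
machineconfiguration.openshift.io/state: Done'

current=$(echo "$describe_output" | awk -F': ' '/currentConfig/ {print $2}')
desired=$(echo "$describe_output" | awk -F': ' '/desiredConfig/ {print $2}')
state=$(echo "$describe_output" | awk -F': ' '/\/state/ {print $2}')

# The node is fully updated only when state is Done and the two configs match.
if [ "$state" = "Done" ] && [ "$current" = "$desired" ]; then
  node_updated=yes
else
  node_updated=no
fi
echo "$node_updated"
```

On a real cluster you would substitute the output of oc describe node for the sample string.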
Confirm that the migration succeeded:
To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN.
$ oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI:
$ oc get nodes
If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.
To list the pods, enter the following command in your CLI:
$ oc get pod -n openshift-machine-config-operator
Example output
NAME                                         READY   STATUS    RESTARTS   AGE
machine-config-controller-75f756f89d-sjp8b   1/1     Running   0          37m
machine-config-daemon-5cf4b                  2/2     Running   0          43h
machine-config-daemon-7wzcd                  2/2     Running   0          43h
machine-config-daemon-fc946                  2/2     Running   0          43h
machine-config-daemon-g2v28                  2/2     Running   0          43h
machine-config-daemon-gcl4f                  2/2     Running   0          43h
machine-config-daemon-l5tnv                  2/2     Running   0          43h
machine-config-operator-79d9c55d5-hth92      1/1     Running   0          37m
machine-config-server-bsc8h                  1/1     Running   0          43h
machine-config-server-hklrm                  1/1     Running   0          43h
machine-config-server-k9rtx                  1/1     Running   0          43h
The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five character alphanumeric sequence.
To display the pod log for each machine config daemon pod shown in the previous output, enter the following command in your CLI:
$ oc logs <pod> -n openshift-machine-config-operator
where <pod> is the name of a machine config daemon pod.
- Resolve any errors in the logs shown by the output from the previous command.
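A quick sketch of matching the machine-config-daemon-<seq> naming pattern, using a hypothetical pod name:

```shell
# Hypothetical pod name following the machine-config-daemon-<seq> pattern,
# where <seq> is a random five-character alphanumeric sequence.
pod="machine-config-daemon-5cf4b"

if echo "$pod" | grep -Eq '^machine-config-daemon-[a-z0-9]{5}$'; then
  match=yes
else
  match=no
fi
echo "$match"
```

A filter like this can help select only the daemon pods (and not the controller, operator, or server pods) when scripting the log collection step.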
To confirm that your pods are not in an error state, enter the following command in your CLI:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'
If pods on a node are in an error state, reboot that node.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "migration": null } }'
To remove the OVN-Kubernetes configuration, enter the following command in your CLI:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig": null } } }'
To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI:
$ oc delete namespace openshift-ovn-kubernetes
24.7.2. Using an Ansible playbook to roll back to the OpenShift SDN network plugin
As a cluster administrator, you can use the playbooks/playbook-rollback.yml playbook from the network.offline_migration_sdn_to_ovnk Ansible collection to roll back your cluster from the OVN-Kubernetes network plugin to the OpenShift SDN network plugin.
Prerequisites
- You installed the python3 package, minimum version 3.10.
- You installed the jmespath and jq packages.
- You logged in to the Red Hat Hybrid Cloud Console and opened the Ansible Automation Platform web console.
- You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. If you do not do this task, your cluster might fail to schedule pods.
Procedure
Install the ansible-core package, minimum version 2.15. The following example command shows how to install the ansible-core package on Red Hat Enterprise Linux (RHEL):
$ sudo dnf install -y ansible-core
Create an ansible.cfg file and add information similar to the following example to the file. Ensure that the file exists in the same directory as where the ansible-galaxy commands and the playbooks run.
$ cat << EOF >> ansible.cfg
[galaxy]
server_list = automation_hub, validated

[galaxy_server.automation_hub]
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=

#[galaxy_server.release_galaxy]
#url=https://galaxy.ansible.com/

[galaxy_server.validated]
url=https://console.redhat.com/api/automation-hub/content/validated/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=
EOF
From the Ansible Automation Platform web console, go to the Connect to Hub page and complete the following steps:
- In the Offline token section of the page, click the Load token button.
- After the token loads, click the Copy to clipboard icon.
- Open the ansible.cfg file and paste the API token in the token= parameter. The API token is required for authenticating against the server URL specified in the ansible.cfg file.
Install the network.offline_migration_sdn_to_ovnk Ansible collection by entering the following ansible-galaxy command:
$ ansible-galaxy collection install network.offline_migration_sdn_to_ovnk
Verify that the network.offline_migration_sdn_to_ovnk Ansible collection is installed on your system:
$ ansible-galaxy collection list | grep network.offline_migration_sdn_to_ovnk
Example output
network.offline_migration_sdn_to_ovnk 1.0.2
The network.offline_migration_sdn_to_ovnk Ansible collection is saved in the default path of ~/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/.
Configure rollback features in the playbooks/playbook-rollback.yml file:
# ...
rollback_disable_auto_migration: true
rollback_egress_ip: false
rollback_egress_firewall: false
rollback_multicast: false
rollback_mtu: 1400
rollback_vxlanPort: 4790
# ...
rollback_disable_auto_migration
- Disables the auto-migration of OVN-Kubernetes plug-in features to the OpenShift SDN CNI plug-in. If you disable auto-migration of features, you must also set the rollback_egress_ip, rollback_egress_firewall, and rollback_multicast parameters to false. If you need to enable auto-migration of features, set the rollback_disable_auto_migration parameter to false.
rollback_mtu
- Optional parameter that sets a specific maximum transmission unit (MTU) for your cluster network after the migration process.
rollback_vxlanPort
- Optional parameter that sets a VXLAN (Virtual Extensible LAN) port for use by the OpenShift SDN CNI plug-in. The default value for the parameter is 4790.
To run the playbooks/playbook-rollback.yml file, enter the following command:
$ ansible-playbook -v playbooks/playbook-rollback.yml
24.8. Migrating from the Kuryr network plugin to the OVN-Kubernetes network plugin
As the administrator of a cluster that runs on Red Hat OpenStack Platform (RHOSP), you can migrate to the OVN-Kubernetes network plugin from the Kuryr SDN network plugin.
To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin.
24.8.1. Migration to the OVN-Kubernetes network provider
You can manually migrate a cluster that runs on Red Hat OpenStack Platform (RHOSP) to the OVN-Kubernetes network provider.
Migration to OVN-Kubernetes is a one-way process. During migration, your cluster will be unreachable for a brief time.
24.8.1.1. Considerations when migrating to the OVN-Kubernetes network provider
Kubernetes namespaces are kept by Kuryr in separate RHOSP networking service (Neutron) subnets. Those subnets and the IP addresses that are assigned to individual pods are not preserved during the migration.
24.8.1.2. How the migration process works
The following table summarizes the migration process by relating the steps that you perform with the actions that your cluster and Operators take.
| User-initiated steps | Migration activity |
|---|---|
| Set the migration field of the Network.operator.openshift.io custom resource named cluster to OVNKubernetes. | The Machine Config Operator (MCO) applies new machine configs to all nodes in the cluster to prepare them for the OVN-Kubernetes deployment. |
| Update the networkType field of the Network.config.openshift.io custom resource. | The Cluster Network Operator (CNO) deploys the OVN-Kubernetes network plugin. |
| Reboot each node in the cluster. | As the nodes reboot, pods are assigned IP addresses on the OVN-Kubernetes cluster network. |
| Clean up remaining resources that Kuryr controlled. | Load balancers, networks, subnets, and other RHOSP resources that Kuryr created are removed from the RHOSP cloud. |
24.8.2. Migrating to the OVN-Kubernetes network plugin
As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes.
During the migration, you must reboot every node in your cluster. Your cluster is unavailable and workloads might be interrupted. Perform the migration only if an interruption in service is acceptable.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have a recent backup of the etcd database.
- You can manually reboot each node.
- The cluster you plan to migrate is in a known good state, without any errors.
- You installed the Python interpreter.
- You installed the openstacksdk Python package.
- You installed the openstack CLI tool.
- You have access to the underlying RHOSP cloud.
Procedure
Back up the configuration for the cluster network by running the following command:
$ oc get Network.config.openshift.io cluster -o yaml > cluster-kuryr.yaml
To set the CLUSTERID variable, run the following command:
$ CLUSTERID=$(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')
To prepare all the nodes for the migration, set the migration field on the Cluster Network Operator configuration object by running the following command:
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{"spec": {"migration": {"networkType": "OVNKubernetes"}}}'
Note
This step does not deploy OVN-Kubernetes immediately. Specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster. This prepares the cluster for the OVN-Kubernetes deployment.
Optional: Customize the following settings for OVN-Kubernetes for your network infrastructure requirements:
- Maximum transmission unit (MTU)
- Geneve (Generic Network Virtualization Encapsulation) overlay network port
- OVN-Kubernetes IPv4 internal subnet
- OVN-Kubernetes IPv6 internal subnet
To customize these settings, customize the following command and enter it:
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>", "v6InternalSubnet":"<ipv6_subnet>" }}}}'
where:
mtu
- Specifies the MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.
port
- Specifies the UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by Kuryr. The default value for the VXLAN port is 4789.
ipv4_subnet
- Specifies an IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16.
ipv6_subnet
- Specifies an IPv6 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/48.
If you do not need to change the default value, omit the key from the patch.
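The sizing rule for the internal subnet can be checked with shell arithmetic; the /16 prefix comes from the default v4InternalSubnet above, while the node count here is a hypothetical example:

```shell
# Default v4InternalSubnet is 100.64.0.0/16, so the prefix length is 16.
prefix=16

# Number of addresses in the subnet: 2^(32 - prefix).
addresses=$((1 << (32 - prefix)))
echo "$addresses"

# The range must be larger than the maximum number of nodes you expect;
# 500 nodes is a hypothetical figure for illustration.
node_count=500
if [ "$addresses" -gt "$node_count" ]; then
  fits=yes
else
  fits=no
fi
echo "$fits"
```

The same arithmetic applies to a custom prefix: a smaller prefix length yields a larger internal range.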
Example patch command to update the mtu field
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}'
Check the machine config pool status by entering the following command:
$ oc get mcp
While the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before continuing.
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Note
By default, the MCO updates one machine per pool at a time. Large clusters take more time to migrate than small clusters.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"
Example output
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Review the output from the previous step. The following statements must be true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
where:
- <config_name>
- Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
The machine config must include the following update to the systemd configuration:
Example output
ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
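A minimal sketch of automating this ExecStart check, using the sample line above as input:

```shell
# Sample systemd line as it appears in the rendered machine config after migration.
execstart='ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes'

# After the migration, configure-ovs.sh must be invoked with the OVNKubernetes argument.
if echo "$execstart" | grep -q 'configure-ovs.sh OVNKubernetes'; then
  plugin_configured=yes
else
  plugin_configured=no
fi
echo "$plugin_configured"
```

On a live cluster you would pipe the output of the oc get machineconfig command above through the same grep test.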
If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors:
To list the pods, enter the following command:
$ oc get pod -n openshift-machine-config-operator
Example output
NAME                                         READY   STATUS    RESTARTS   AGE
machine-config-controller-75f756f89d-sjp8b   1/1     Running   0          37m
machine-config-daemon-5cf4b                  2/2     Running   0          43h
machine-config-daemon-7wzcd                  2/2     Running   0          43h
machine-config-daemon-fc946                  2/2     Running   0          43h
machine-config-daemon-g2v28                  2/2     Running   0          43h
machine-config-daemon-gcl4f                  2/2     Running   0          43h
machine-config-daemon-l5tnv                  2/2     Running   0          43h
machine-config-operator-79d9c55d5-hth92      1/1     Running   0          37m
machine-config-server-bsc8h                  1/1     Running   0          43h
machine-config-server-hklrm                  1/1     Running   0          43h
machine-config-server-k9rtx                  1/1     Running   0          43h
The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five character alphanumeric sequence.
Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:
$ oc logs <pod> -n openshift-machine-config-operator
where:
- <pod>
- Specifies the name of a machine config daemon pod.
- Resolve any errors in the logs shown by the output from the previous command.
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:
To specify the network provider without changing the cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster --type=merge \
  --patch '{"spec": {"networkType": "OVNKubernetes"}}'
To specify a different cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
  --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": "<prefix>" } ], "networkType": "OVNKubernetes" } }'
where:
- <cidr>
- Specifies a CIDR block.
- <prefix>
Specifies a slice of the CIDR block that is apportioned to each node in your cluster.
Important
You cannot change the service network address block during the migration.
You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.
To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password:
#!/bin/bash

for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
  echo "reboot node $ip"
  ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
Note
If SSH access is not available, you can use the openstack command:
$ for name in $(openstack server list --name "${CLUSTERID}*" -f value -c Name); do openstack server reboot "${name}"; done
Alternatively, you might be able to reboot each node through the management portal for your infrastructure provider. Otherwise, contact the appropriate authority to either gain access to the virtual machines through SSH or the management portal and OpenStack client.
Verification
Confirm that the migration succeeded, and then remove the migration resources:
To confirm that the network plugin is OVN-Kubernetes, enter the following command.
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
The value of status.networkType must be OVNKubernetes.
To confirm that the cluster nodes are in the Ready state, enter the following command:
$ oc get nodes
To confirm that your pods are not in an error state, enter the following command:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'
If pods on a node are in an error state, reboot that node.
To confirm that all of the cluster Operators are not in an abnormal state, enter the following command:
$ oc get co
The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information.
Important
Do not proceed if any of the previous verification steps indicate errors. You might encounter pods that have a Terminating state due to finalizers that are removed during clean up. They are not an error indication.
If the migration completed and your cluster is in a good state, remove the migration configuration from the CNO configuration object by entering the following command:
$ oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{"spec": {"migration": null}}'
24.8.3. Cleaning up resources after migration
After migration from the Kuryr network plugin to the OVN-Kubernetes network plugin, you must clean up the resources that Kuryr created previously.
The clean up process relies on a Python virtual environment to ensure that the package versions that you use support tags for Octavia objects. You do not need a virtual environment if you are certain that your environment uses at minimum:
- The openstacksdk Python package, version 0.54.0
- The python-openstackclient Python package, version 5.5.0
- The python-octaviaclient Python package, version 2.3.0
If you decide to use these particular versions, be sure to pull python-neutronclient at a version earlier than 9.0.0.
Prerequisites
- You installed the OpenShift Container Platform CLI (oc).
- You installed a Python interpreter.
- You installed the openstacksdk Python package.
- You installed the openstack CLI.
- You have access to the underlying RHOSP cloud.
- You can access the cluster as a user with the cluster-admin role.
Procedure
Create a clean-up Python virtual environment:
Create a temporary directory for your environment. For example:
$ python3 -m venv /tmp/venv
The virtual environment located in the /tmp/venv directory is used in all clean up examples.
Enter the virtual environment. For example:
$ source /tmp/venv/bin/activate
Upgrade the pip command in the virtual environment by running the following command:
(venv) $ pip install --upgrade pip
Install the required Python packages by running the following command:
(venv) $ pip install openstacksdk==0.54.0 python-openstackclient==5.5.0 python-octaviaclient==2.3.0 'python-neutronclient<9.0.0'
In your terminal, set variables to cluster and Kuryr identifiers by running the following commands:
Set the cluster ID:
(venv) $ CLUSTERID=$(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')
Set the cluster tag:
(venv) $ CLUSTERTAG="openshiftClusterID=${CLUSTERID}"
Set the router ID:
(venv) $ ROUTERID=$(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.routerId" | uniq)
Create a Bash function that removes finalizers from specified resources by running the following command:
(venv) $ function REMFIN {
    local resource=$1
    local finalizer=$2
    for res in $(oc get "${resource}" -A --template='{{range $i,$p := .items}}{{ $p.metadata.name }}|{{ $p.metadata.namespace }}{{"\n"}}{{end}}'); do
        name=${res%%|*}
        ns=${res##*|}
        yaml=$(oc get -n "${ns}" "${resource}" "${name}" -o yaml)
        if echo "${yaml}" | grep -q "${finalizer}"; then
            echo "${yaml}" | grep -v "${finalizer}" | oc replace -n "${ns}" "${resource}" "${name}" -f -
        fi
    done
}
The function takes two parameters: the first parameter is the name of the resource, and the second parameter is the finalizer to remove. The named resource is removed from the cluster and its definition is replaced with copied data, excluding the specified finalizer.
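The finalizer-stripping idea inside REMFIN can be demonstrated offline on a miniature manifest, without a cluster; the Service shown here is illustrative:

```shell
# A miniature manifest with a finalizer line, standing in for the YAML that
# `oc get ... -o yaml` would return for a real resource.
yaml='apiVersion: v1
kind: Service
metadata:
  name: example
  finalizers:
  - kuryr.openstack.org/service-finalizer'

finalizer='kuryr.openstack.org/service-finalizer'

# REMFIN drops the matching line with `grep -v` before feeding the
# result back to `oc replace`; here we only show the stripped output.
stripped=$(echo "$yaml" | grep -v "$finalizer")
echo "$stripped"
```

Because grep -v removes whole lines, the finalizer entry disappears while the rest of the manifest, including the metadata, is preserved.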
To remove Kuryr finalizers from services, enter the following command:
(venv) $ REMFIN services kuryr.openstack.org/service-finalizer
To remove the Kuryr service-subnet-gateway-ip service, enter the following command:
(venv) $ if oc get -n openshift-kuryr service service-subnet-gateway-ip &>/dev/null; then
    oc -n openshift-kuryr delete service service-subnet-gateway-ip
fi
To remove all tagged RHOSP load balancers from Octavia, enter the following command:
(venv) $ for lb in $(openstack loadbalancer list --tags "${CLUSTERTAG}" -f value -c id); do
    openstack loadbalancer delete --cascade "${lb}"
done
To remove Kuryr finalizers from all KuryrLoadBalancer CRs, enter the following command:
CRs, enter the following command:KuryrLoadBalancer(venv) $ REMFIN kuryrloadbalancers.openstack.org kuryr.openstack.org/kuryrloadbalancer-finalizersTo remove the
namespace, enter the following command:openshift-kuryr(venv) $ oc delete namespace openshift-kuryrTo remove the Kuryr service subnet from the router, enter the following command:
(venv) $ openstack router remove subnet "${ROUTERID}" "${CLUSTERID}-kuryr-service-subnet"To remove the Kuryr service network, enter the following command:
(venv) $ openstack network delete "${CLUSTERID}-kuryr-service-network"To remove Kuryr finalizers from all pods, enter the following command:
(venv) $ REMFIN pods kuryr.openstack.org/pod-finalizer
To remove Kuryr finalizers from all KuryrPort CRs, enter the following command:
(venv) $ REMFIN kuryrports.openstack.org kuryr.openstack.org/kuryrport-finalizer
This command deletes the KuryrPort CRs.
To remove Kuryr finalizers from network policies, enter the following command:
(venv) $ REMFIN networkpolicy kuryr.openstack.org/networkpolicy-finalizer
To remove Kuryr finalizers from remaining network policies, enter the following command:
(venv) $ REMFIN kuryrnetworkpolicies.openstack.org kuryr.openstack.org/networkpolicy-finalizer
To remove subports that Kuryr created from trunks, enter the following command:
(venv) $ mapfile trunks < <(python -c "import openstack; n = openstack.connect().network; print('\n'.join([x.id for x in n.trunks(any_tags='$CLUSTERTAG')]))") && \ i=0 && \ for trunk in "${trunks[@]}"; do trunk=$(echo "$trunk"|tr -d '\n') i=$((i+1)) echo "Processing trunk $trunk, ${i}/${#trunks[@]}." subports=() for subport in $(python -c "import openstack; n = openstack.connect().network; print(' '.join([x['port_id'] for x in n.get_trunk('$trunk').sub_ports if '$CLUSTERTAG' in n.get_port(x['port_id']).tags]))"); do subports+=("$subport"); done args=() for sub in "${subports[@]}" ; do args+=("--subport $sub") done if [ ${#args[@]} -gt 0 ]; then openstack network trunk unset ${args[*]} "${trunk}" fi doneTo retrieve all networks and subnets from
CRs and remove ports, router interfaces and the network itself, enter the following command:KuryrNetwork(venv) $ mapfile -t kuryrnetworks < <(oc get kuryrnetwork -A --template='{{range $i,$p := .items}}{{ $p.status.netId }}|{{ $p.status.subnetId }}{{"\n"}}{{end}}') && \ i=0 && \ for kn in "${kuryrnetworks[@]}"; do i=$((i+1)) netID=${kn%%|*} subnetID=${kn##*|} echo "Processing network $netID, ${i}/${#kuryrnetworks[@]}" # Remove all ports from the network. for port in $(python -c "import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='$netID') if x.device_owner != 'network:router_interface']))"); do ( openstack port delete "${port}" ) & # Only allow 20 jobs in parallel. if [[ $(jobs -r -p | wc -l) -ge 20 ]]; then wait -n fi done wait # Remove the subnet from the router. openstack router remove subnet "${ROUTERID}" "${subnetID}" # Remove the network. openstack network delete "${netID}" doneTo remove the Kuryr security group, enter the following command:
(venv) $ openstack security group delete "${CLUSTERID}-kuryr-pods-security-group"To remove all tagged subnet pools, enter the following command:
(venv) $ for subnetpool in $(openstack subnet pool list --tags "${CLUSTERTAG}" -f value -c ID); do openstack subnet pool delete "${subnetpool}" doneTo check that all of the networks based on
CRs were removed, enter the following command:KuryrNetwork(venv) $ networks=$(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.netId") && \ for existingNet in $(openstack network list --tags "${CLUSTERTAG}" -f value -c ID); do if [[ $networks =~ $existingNet ]]; then echo "Network still exists: $existingNet" fi doneIf the command returns any existing networks, intestigate and remove them before you continue.
To remove security groups that are related to network policy, enter the following command:
(venv) $ for sgid in $(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do openstack security group delete "${sgid}" doneTo remove finalizers from
CRs, enter the following command:KuryrNetwork(venv) $ REMFIN kuryrnetworks.openstack.org kuryrnetwork.finalizers.kuryr.openstack.orgTo remove the Kuryr router, enter the following command:
(venv) $ if python3 -c "import sys; import openstack; n = openstack.connect().network; r = n.get_router('$ROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)"; then openstack router delete "${ROUTERID}" fi
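The grep-based approach in the REMFIN function strips any line that contains the finalizer string. With structured data the same edit is more targeted; this standalone Python sketch (an illustration with a hypothetical resource stub, not part of the official procedure and requiring no cluster access) removes one finalizer from a resource's metadata:

```python
def remove_finalizer(resource: dict, finalizer: str) -> dict:
    """Remove one finalizer from a Kubernetes-style resource dict, if present."""
    finalizers = resource.get("metadata", {}).get("finalizers", [])
    resource["metadata"]["finalizers"] = [f for f in finalizers if f != finalizer]
    return resource

# Hypothetical KuryrPort-like resource stub for illustration.
res = {
    "metadata": {
        "name": "example",
        "finalizers": ["kuryr.openstack.org/kuryrport-finalizer", "other/finalizer"],
    }
}

remove_finalizer(res, "kuryr.openstack.org/kuryrport-finalizer")
print(res["metadata"]["finalizers"])  # ['other/finalizer']
```

Unlike the grep filter, this leaves every other line of the resource untouched, which avoids accidentally dropping unrelated lines that happen to contain the finalizer string.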
24.9. Converting to IPv4/IPv6 dual-stack networking Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can convert your IPv4 single-stack cluster network to a dual-stack cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled.
- While using dual-stack networking, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1, where IPv6 is required.
- A dual-stack network is supported on clusters provisioned on bare metal, IBM Power®, IBM Z® infrastructure, single-node OpenShift, and VMware vSphere.
24.9.1. Converting to a dual-stack cluster network Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network.
After converting to dual-stack networking, only newly created pods are assigned IPv6 addresses. Any pods created before the conversion must be recreated to receive an IPv6 address.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
- Your cluster uses the OVN-Kubernetes network plugin.
- The cluster nodes have IPv6 addresses.
- You have configured an IPv6-enabled router based on your infrastructure.
Procedure
To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML:
- op: add
  path: /spec/clusterNetwork/-
  value: 1
    cidr: fd01::/48
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112 2

1 Specify an object with the cidr and hostPrefix parameters. The host prefix must be 64 or greater. The IPv6 Classless Inter-Domain Routing (CIDR) prefix must be large enough to accommodate the specified host prefix.
2 Specify an IPv6 CIDR with a prefix of 112. Kubernetes uses only the lowest 16 bits. For a prefix of 112, IP addresses are assigned from 112 to 128 bits.
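The prefix arithmetic can be checked with Python's standard ipaddress module: a /112 IPv6 service network leaves 128 - 112 = 16 host bits, which is 65,536 assignable service addresses. This is only a sketch to illustrate the sizing, not part of the conversion procedure:

```python
import ipaddress

# The IPv6 service network from the patch file above.
service_net = ipaddress.ip_network("fd02::/112")

# A 128-bit IPv6 address minus a /112 prefix leaves 16 host bits.
host_bits = service_net.max_prefixlen - service_net.prefixlen
print(host_bits)                  # 16
print(service_net.num_addresses)  # 65536 == 2**16
```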
To patch the cluster network configuration, enter the following command:
$ oc patch network.config.openshift.io cluster \
  --type='json' --patch-file <file>.yaml

where:

<file>
Specifies the name of the file you created in the previous step.
Example output
network.config.openshift.io/cluster patched
Verification
Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the previous procedure.
Display the network configuration:
$ oc describe network

Example output
Status:
  Cluster Network:
    Cidr:         10.128.0.0/14
    Host Prefix:  23
    Cidr:         fd01::/48
    Host Prefix:  64
  Cluster Network MTU:  1400
  Network Type:         OVNKubernetes
  Service Network:
    172.30.0.0/16
    fd02::/112
24.9.2. Converting to a single-stack cluster network Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can convert your dual-stack cluster network to a single-stack cluster network.
If you originally converted your IPv4 single-stack cluster network to a dual-stack cluster network, you can convert back only to the IPv4 single-stack cluster network, not to an IPv6 single-stack cluster network. The same restriction applies to a cluster that was originally IPv6 single-stack: it can convert back only to the IPv6 single-stack cluster network.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
- Your cluster uses the OVN-Kubernetes network plugin.
- The cluster nodes have IPv6 addresses.
- You have enabled dual-stack networking.
Procedure
Edit the networks.config.openshift.io custom resource (CR) by running the following command:

$ oc edit networks.config.openshift.io

Remove the IPv4 or IPv6 configuration that you added to the cidr and hostPrefix parameters when you completed the "Converting to a dual-stack cluster network" procedure steps.
24.10. Configuring OVN-Kubernetes internal IP address subnets Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can change the IP address ranges that the OVN-Kubernetes network plugin uses for the join and transit subnets.
24.10.1. Configuring the OVN-Kubernetes join subnet Link kopierenLink in die Zwischenablage kopiert!
You can change the join subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
- Ensure that the cluster uses the OVN-Kubernetes network plugin.
Procedure
To change the OVN-Kubernetes join subnet, enter the following command:
$ oc patch network.operator.openshift.io cluster --type='merge' \
  -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":
  {"ipv4":{"internalJoinSubnet": "<join_subnet>"},
  "ipv6":{"internalJoinSubnet": "<join_subnet>"}}}}}'

where:

<join_subnet>
Specifies an IP address subnet for internal use by OVN-Kubernetes. The subnet must be larger than the number of nodes in the cluster and it must be large enough to accommodate one IP address per node in the cluster. This subnet cannot overlap with any other subnets used by OpenShift Container Platform or on the host itself. The default value for IPv4 is 100.64.0.0/16 and the default value for IPv6 is fd98::/64.
Example output
network.operator.openshift.io/cluster patched
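Before patching, it can help to sanity-check a candidate join subnet against the node count and the subnets the cluster already uses. The following standalone sketch uses Python's ipaddress module with illustrative values (the subnet and node count are assumptions, not data read from a live cluster):

```python
import ipaddress

# Hypothetical values; substitute your own cluster's networks and node count.
candidate_join = ipaddress.ip_network("100.68.0.0/16")
cluster_net = ipaddress.ip_network("10.128.0.0/14")
service_net = ipaddress.ip_network("172.30.0.0/16")
node_count = 120

# The join subnet needs at least one address per node.
assert candidate_join.num_addresses - 2 >= node_count, "join subnet too small"

# It must not overlap any subnet already used by the cluster.
for net in (cluster_net, service_net):
    assert not candidate_join.overlaps(net), f"candidate overlaps {net}"

print("candidate join subnet looks usable")
```

The same checks apply when choosing a transit subnet in the next section; only the default value being replaced differs.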
Verification
To confirm that the configuration is active, enter the following command:
$ oc get network.operator.openshift.io \
  -o jsonpath="{.items[0].spec.defaultNetwork}"

It can take up to 30 minutes for this change to take effect.
Example output
{
  "ovnKubernetesConfig": {
    "ipv4": {
      "internalJoinSubnet": "100.64.1.0/16"
    }
  },
  "type": "OVNKubernetes"
}
24.10.2. Configuring the OVN-Kubernetes transit subnet Link kopierenLink in die Zwischenablage kopiert!
You can change the transit subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
- Ensure that the cluster uses the OVN-Kubernetes network plugin.
Procedure
To change the OVN-Kubernetes transit subnet, enter the following command:
$ oc patch network.operator.openshift.io cluster --type='merge' \
  -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":
  {"ipv4":{"internalTransitSwitchSubnet": "<transit_subnet>"},
  "ipv6":{"internalTransitSwitchSubnet": "<transit_subnet>"}}}}}'

where:

<transit_subnet>
Specifies an IP address subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. The default value for IPv4 is 100.88.0.0/16 and the default value for IPv6 is fd97::/64.
Example output
network.operator.openshift.io/cluster patched
Verification
To confirm that the configuration is active, enter the following command:
$ oc get network.operator.openshift.io \
  -o jsonpath="{.items[0].spec.defaultNetwork}"

It can take up to 30 minutes for this change to take effect.
Example output
{
  "ovnKubernetesConfig": {
    "ipv4": {
      "internalTransitSwitchSubnet": "100.88.1.0/16"
    }
  },
  "type": "OVNKubernetes"
}
24.11. Logging for egress firewall and network policy rules Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can configure audit logging for your cluster and enable logging for one or more namespaces. OpenShift Container Platform produces audit logs for both egress firewalls and network policies.
Audit logging is available for only the OVN-Kubernetes network plugin.
24.11.1. Audit logging Link kopierenLink in die Zwischenablage kopiert!
The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events.
You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log.
You can enable audit logging for each namespace by annotating the namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow, deny, or both values to enable audit logging for the namespace.

A network policy does not support setting the Pass action as a rule.
The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues.
Example namespace annotation
kind: Namespace
apiVersion: v1
metadata:
name: example1
annotations:
k8s.ovn.org/acl-logging: |-
{
"deny": "info",
"allow": "info"
}
To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file.

The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0.
Example logging message that outputs parameters and their values
<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow>
Where:

- <timestamp> states the time and date for the creation of a log message.
- <message_serial> lists the serial number for a log message.
- acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin.
- <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks then two severity levels show in the log message output.
- <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database (nbdb) that was created by the network policy.
- <verdict> can be either allow or drop.
- <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod.
- <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields.
The following example shows OVS fields that the flow parameter uses to extract packet information:
Example of OVS fields used by the flow parameter to extract packet information
<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>
Where:

- <proto> states the protocol. Valid values are tcp and udp.
- vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic.
- <src_mac> specifies the source for the Media Access Control (MAC) address.
- <source_mac> specifies the destination for the MAC address.
- <source_ip> lists the source IP address.
- <target_ip> lists the target IP address.
- <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic.
- <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network.
- <ip_ttl> states the Time To Live (TTL) information for a packet.
- <fragment> specifies what type of IP fragments or IP non-fragments to match.
- <tcp_src_port> shows the source port for TCP and UDP protocols.
- <tcp_dst_port> lists the destination port for TCP and UDP protocols.
- <tcp_flags> supports numerous flags such as SYN, ACK, PSH, and so on. If you need to set multiple values then each value is separated by a vertical bar (|). The UDP protocol does not support this parameter.
For more information about the previous field descriptions, go to the OVS manual page for ovs-fields.
Example ACL deny log entry for a network policy
2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn
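Because the entries have a fixed structure, they are straightforward to post-process. The following sketch (an illustration only, not a supported tool) uses a regular expression to pull the name, verdict, severity, and direction fields out of an ACL audit log line such as the examples above:

```python
import re

# Matches the name/verdict/severity/direction fields of an ACL audit log entry.
ACL_RE = re.compile(
    r'name="(?P<name>[^"]+)", verdict=(?P<verdict>\w+), '
    r'severity=(?P<severity>\w+), direction=(?P<direction>[\w-]+)'
)

line = (
    '2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|'
    'name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, '
    'direction=to-lport: tcp,vlan_tci=0x0000,nw_src=10.131.0.39,nw_dst=10.129.2.35'
)

m = ACL_RE.search(line)
print(m.group("name"))       # NP:verify-audit-logging:Ingress
print(m.group("verdict"))    # drop
print(m.group("direction"))  # to-lport
```

Feeding each line of /var/log/ovn/acl-audit-log.log through a pattern like this makes it easy to count drops per policy or alert on unexpected verdicts.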
The following table describes namespace annotation values:
| Field | Description |
|---|---|
| deny | Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert, warning, notice, info, or debug values. |
| allow | Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert, warning, notice, info, or debug values. |
| pass | A pass action applies to an admin network policy's ACL rule. |
24.11.2. Audit configuration Link kopierenLink in die Zwischenablage kopiert!
The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging:
Audit logging configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
policyAuditConfig:
destination: "null"
maxFileSize: 50
rateLimit: 20
syslogFacility: local0
The following table describes the configuration fields for audit logging.
| Field | Type | Description |
|---|---|---|
| rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
| maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB. |
| maxLogFiles | integer | The maximum number of log files that are retained. |
| destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file specified by <file>), null (do not send the audit logs to any additional target). |
| syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
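To make the shapes of the destination values concrete, this sketch (illustrative code, not part of OVN-Kubernetes) classifies a destination string into the target type described in the table:

```python
def classify_destination(dest: str) -> str:
    """Classify a policyAuditConfig destination string (illustrative only)."""
    if dest == "null":
        return "no additional target"
    if dest == "libc":
        return "journald via libc syslog()"
    if dest.startswith("udp:"):
        # Split "udp:<host>:<port>" on the last colon to keep IPv4 hosts intact.
        host, _, port = dest[len("udp:"):].rpartition(":")
        return f"syslog server {host} on port {port}"
    if dest.startswith("unix:"):
        return f"UNIX domain socket {dest[len('unix:'):]}"
    raise ValueError(f"unrecognized destination: {dest}")

print(classify_destination("udp:1.2.3.4:514"))  # syslog server 1.2.3.4 on port 514
print(classify_destination("null"))             # no additional target
```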
24.11.3. Configuring egress firewall and network policy auditing for a cluster Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can customize audit logging for your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To customize the audit logging configuration, enter the following command:
$ oc edit network.operator.openshift.io/cluster

Tip: You can alternatively customize and apply the following YAML to configure audit logging:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"
        maxFileSize: 50
        rateLimit: 20
        syslogFacility: local0
Verification
To create a namespace with network policies, complete the following steps:

Create a namespace for verification:

$ cat <<EOF| oc create -f -
kind: Namespace
apiVersion: v1
metadata:
  name: verify-audit-logging
  annotations:
    k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }'
EOF

Example output

namespace/verify-audit-logging created

Create network policies for the namespace:

$ cat <<EOF| oc create -n verify-audit-logging -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector:
    matchLabels:
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
  namespace: verify-audit-logging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: verify-audit-logging
EOF

Example output

networkpolicy.networking.k8s.io/deny-all created
networkpolicy.networking.k8s.io/allow-from-same-namespace created

Create a pod for source traffic in the default namespace:

$ cat <<EOF| oc create -n default -f -
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: client
    image: registry.access.redhat.com/rhel7/rhel-tools
    command: ["/bin/sh", "-c"]
    args: ["sleep inf"]
EOF

Create two pods in the verify-audit-logging namespace:

$ for name in client server; do
cat <<EOF| oc create -n verify-audit-logging -f -
apiVersion: v1
kind: Pod
metadata:
  name: ${name}
spec:
  containers:
  - name: ${name}
    image: registry.access.redhat.com/rhel7/rhel-tools
    command: ["/bin/sh", "-c"]
    args: ["sleep inf"]
EOF
done

Example output

pod/client created
pod/server created

To generate traffic and produce network policy audit log entries, complete the following steps:

Obtain the IP address for the pod named server in the verify-audit-logging namespace:

$ POD_IP=$(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')

Ping the IP address from the previous command from the pod named client in the default namespace and confirm that all packets are dropped:

$ oc exec -it client -n default -- /bin/ping -c 2 $POD_IP

Example output

PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data.

--- 10.128.2.55 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 2041ms

Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed:

$ oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 $POD_IP

Example output
PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms
Display the latest entries in the network policy audit log:
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
done

Example output
2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: 
icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
24.11.4. Enabling egress firewall and network policy audit logging for a namespace Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can enable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To enable audit logging for a namespace, enter the following command:
$ oc annotate namespace <namespace> \
  k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }'

where:
<namespace>- Specifies the name of the namespace.
TipYou can alternatively apply the following YAML to enable audit logging:
kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: |-
      {
        "deny": "alert",
        "allow": "notice"
      }

Example output
namespace/verify-audit-logging annotated
Verification
Display the latest entries in the audit log:
$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print $1 }') ; do
    oc exec -it $pod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log
done

Example output
2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
24.11.5. Disabling egress firewall and network policy audit logging for a namespace Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can disable audit logging for a namespace.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster with a user with cluster-admin privileges.
Procedure
To disable audit logging for a namespace, enter the following command:
$ oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-

where:
<namespace>- Specifies the name of the namespace.
TipYou can alternatively apply the following YAML to disable audit logging:
kind: Namespace
apiVersion: v1
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/acl-logging: null

Example output
namespace/verify-audit-logging annotated
24.12. Configuring IPsec encryption Link kopierenLink in die Zwischenablage kopiert!
With IPsec enabled, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec Transport mode.
IPsec is disabled by default. It can be enabled either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header.
IPsec on IBM Cloud® supports only NAT-T. Using ESP is not supported.
The following support limitations exist for IPsec on an OpenShift Container Platform cluster:
- You must disable IPsec before updating to OpenShift Container Platform 4.15. After disabling IPsec, you must also delete the associated IPsec daemonsets. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. (OCPBUGS-43323)
Use the procedures in the following documentation to:
- Enable and disable IPSec after cluster installation
- Configure support for external IPsec endpoints outside the cluster
- Verify that IPsec encrypts traffic between pods on different nodes
24.12.1. Prerequisites Link kopierenLink in die Zwischenablage kopiert!
- You have decreased the size of the cluster MTU by 46 bytes to allow for the additional overhead of the IPsec ESP header. For more information on resizing the MTU that your cluster uses, see Changing the MTU for the cluster network.
24.12.2. Network connectivity requirements when IPsec is enabled Link kopierenLink in die Zwischenablage kopiert!
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
| Protocol | Port | Description |
|---|---|---|
| UDP | 500 | IPsec IKE packets |
| UDP | 4500 | IPsec NAT-T packets |
| ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
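If host firewalls sit between cluster nodes, the traffic in the table must be permitted in both directions. As an illustrative sketch only (the chain names and rule layout are assumptions, not OpenShift-managed configuration), nftables rules permitting this traffic might look like:

```text
# Illustrative nftables rules: permit IPsec control (IKE, NAT-T) and data (ESP) traffic.
nft add rule inet filter input udp dport { 500, 4500 } accept
nft add rule inet filter input meta l4proto esp accept
```

On hosts managed by OpenShift Container Platform, firewall configuration is normally handled for you; rules like these are only relevant for external firewalls between machines.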
24.12.3. IPsec encryption for pod-to-pod traffic Link kopierenLink in die Zwischenablage kopiert!
OpenShift Container Platform supports IPsec encryption for network traffic between pods.
24.12.3.1. Types of network traffic flows encrypted by pod-to-pod IPsec Link kopierenLink in die Zwischenablage kopiert!
With IPsec enabled, only the following network traffic flows between pods are encrypted:
- Traffic between pods on different nodes on the cluster network
- Traffic from a pod on the host network to a pod on the cluster network
The following traffic flows are not encrypted:
- Traffic between pods on the same node on the cluster network
- Traffic between pods on the host network
- Traffic from a pod on the cluster network to a pod on the host network
The encrypted and unencrypted flows are illustrated in the following diagram:
24.12.3.2. Encryption protocol and IPsec mode Link kopierenLink in die Zwischenablage kopiert!
The encrypt cipher used is AES-GCM-16-256. The integrity check value (ICV) is 16 bytes. The key length is 256 bits.
The IPsec mode used is Transport mode, a mode that encrypts end-to-end communication by adding an Encapsulated Security Payload (ESP) header to the IP header of the original packet and encrypts the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication.
24.12.3.3. Security certificate generation and rotation Link kopierenLink in die Zwischenablage kopiert!
The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO.
The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse.
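The rotation schedule is simple calendar arithmetic. As a sketch only (the actual timestamps are managed by the CNO and the issue date here is hypothetical), a node certificate issued today is rotated 4.5 years in, about half a year before its 5-year expiry:

```python
from datetime import datetime, timedelta

YEAR = timedelta(days=365.25)  # average calendar year, including leap days

issued = datetime(2024, 1, 1)   # hypothetical issue date
expires = issued + 5 * YEAR     # node certificates are valid for 5 years
rotated = issued + 4.5 * YEAR   # rotation happens after 4.5 years

margin = expires - rotated      # safety window before expiry
print(margin.days)              # 182
```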
24.12.3.4. Enabling pod-to-pod IPsec encryption Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can enable pod-to-pod IPsec encryption after cluster installation.
Prerequisites
- Install the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- You have reduced the size of your cluster’s maximum transmission unit (MTU) by 46 bytes to allow for the overhead of the IPsec ESP header.
Procedure
To enable IPsec encryption, enter the following command:
$ oc patch networks.operator.openshift.io cluster --type=merge \ -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}'
Verification
To find the names of the OVN-Kubernetes data plane pods, enter the following command:
$ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node

Example output
ovnkube-node-5xqbf 8/8 Running 0 28m
ovnkube-node-6mwcx 8/8 Running 0 29m
ovnkube-node-ck5fr 8/8 Running 0 31m
ovnkube-node-fr4ld 8/8 Running 0 26m
ovnkube-node-wgs4l 8/8 Running 0 33m
ovnkube-node-zfvcl 8/8 Running 0 34m

Verify that IPsec is enabled on your cluster by entering the following command. The command output must state true to indicate that the node has IPsec enabled.

$ oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> \
  ovn-nbctl --no-leader-only get nb_global . ipsec

Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf, for a data plane pod from the previous step.
24.12.3.5. Disabling IPsec encryption
As a cluster administrator, you can disable IPsec encryption only if you enabled IPsec after cluster installation.
To avoid issues with your installed cluster, ensure that after you disable IPsec you also delete the associated IPsec daemon set pods.
Prerequisites

- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
Procedure
To disable IPsec encryption, enter the following command:
$ oc patch networks.operator.openshift.io/cluster --type=json \
  -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'

To find the names of the OVN-Kubernetes data plane pods that exist on a node in your cluster, enter the following command:

$ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node

Example output

ovnkube-node-5xqbf   8/8   Running   0   28m
ovnkube-node-6mwcx   8/8   Running   0   29m
ovnkube-node-ck5fr   8/8   Running   0   31m
...

To check if a node in your cluster has IPsec disabled, enter the following command. Ensure that you enter this command for each node that exists in your cluster. The command output must state false to indicate that the node has IPsec disabled.

$ oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec

- Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf, for a data plane pod from the previous step.

To remove the ovn-ipsec-host daemon set, which configures IPsec connections for east-west traffic on a node, from the openshift-ovn-kubernetes namespace, enter the following command:

$ oc delete daemonset ovn-ipsec-host -n openshift-ovn-kubernetes

To remove the ovn-ipsec-containerized daemon set, which also configures IPsec connections for east-west traffic on a node, from the openshift-ovn-kubernetes namespace, enter the following command:

$ oc delete daemonset ovn-ipsec-containerized -n openshift-ovn-kubernetes

Verify that the ovn-ipsec-host and ovn-ipsec-containerized daemon set pods were removed from all the nodes in your cluster by entering the following command. If the command output does not list the pods, the removal operation is successful.

$ oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec

Note: You might need to re-run the oc delete command for a pod because sometimes the initial command attempt might not delete the pod.

- Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets.
24.12.4. IPsec encryption for external traffic
OpenShift Container Platform supports IPsec encryption for traffic to external hosts.
You must supply a custom IPsec configuration, which includes the IPsec configuration file itself and TLS certificates.
Ensure that the following prohibitions are observed:
- The custom IPsec configuration must not include any connection specifications that might interfere with the cluster’s pod-to-pod IPsec configuration.
- Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can collide with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node.
IPsec support for external endpoints is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
24.12.4.1. Enabling IPsec encryption for external IPsec endpoints
As a cluster administrator, you can enable IPsec encryption between the cluster and external IPsec endpoints. Because this procedure uses Butane to create machine configs, you must have the butane utility installed.

After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to roll out the new machine config.
Prerequisites

- Install the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header.
- You have installed the butane utility.
- You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
Procedure
- Create an IPsec configuration file named ipsec-endpoint-config.conf. The configuration is consumed in the next step. For more information, see Libreswan as an IPsec VPN implementation.
- Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps.
  - left_server.p12: The certificate bundle for the IPsec endpoints
  - ca.pem: The certificate authority that you signed your certificates with
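The contents of the ipsec-endpoint-config.conf file depend entirely on your endpoint. As a minimal hypothetical sketch, a Libreswan transport-mode connection could look like the following, where the connection name, the placeholder addresses, and the left_server certificate nickname (matching the PKCS#12 bundle above) are all assumptions to replace with your own values:

```
conn external-host
    type=transport
    auto=start
    left=<cluster_node_ip>
    leftid=%fromcert
    leftcert=left_server
    right=<external_host_ip>
    rightid=%fromcert
    ikev2=insist
    authby=rsasig
```

Consult the Libreswan documentation for the full set of connection options before using a configuration like this.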
-
Create a machine config to apply the IPsec configuration to your cluster by using the following two steps:
To add the IPsec configuration, create Butane config files for the control plane and worker nodes with the following contents:
Note: The Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in 0. For example, 4.14.0. See "Creating machine configs with Butane" for information about Butane.

$ for role in master worker; do
cat >> "99-ipsec-${role}-endpoint-config.bu" <<EOF
variant: openshift
version: 4.14.0
metadata:
  name: 99-${role}-import-certs-enable-svc-os-ext
  labels:
    machineconfiguration.openshift.io/role: $role
openshift:
  extensions:
  - ipsec
systemd:
  units:
  - name: ipsec-import.service
    enabled: true
    contents: |
      [Unit]
      Description=Import external certs into ipsec NSS
      Before=ipsec.service

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/ipsec-addcert.sh
      RemainAfterExit=false
      StandardOutput=journal

      [Install]
      WantedBy=multi-user.target
  - name: ipsecenabler.service
    enabled: true
    contents: |
      [Service]
      Type=oneshot
      ExecStart=systemctl enable --now ipsec.service

      [Install]
      WantedBy=multi-user.target
storage:
  files:
  - path: /etc/ipsec.d/ipsec-endpoint-config.conf
    mode: 0400
    overwrite: true
    contents:
      local: ipsec-endpoint-config.conf
  - path: /etc/pki/certs/ca.pem
    mode: 0400
    overwrite: true
    contents:
      local: ca.pem
  - path: /etc/pki/certs/left_server.p12
    mode: 0400
    overwrite: true
    contents:
      local: left_server.p12
  - path: /usr/local/bin/ipsec-addcert.sh
    mode: 0740
    overwrite: true
    contents:
      inline: |
        #!/bin/bash -e
        echo "importing cert to NSS"
        certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem
        pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/
        certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/
EOF
done

To transform the Butane files that you created in the previous step into machine configs, enter the following command:
$ for role in master worker; do
  butane -d . 99-ipsec-${role}-endpoint-config.bu -o ./99-ipsec-$role-endpoint-config.yaml
done
To apply the machine configs to your cluster, enter the following command:
$ for role in master worker; do
  oc apply -f 99-ipsec-${role}-endpoint-config.yaml
done

Important: As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available.

Check the machine config pool status by entering the following command:

$ oc get mcp

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Note: By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
24.12.5. Additional resources
24.13. Configure an external gateway on the default network
As a cluster administrator, you can configure an external gateway on the default network.
This feature offers the following benefits:
- Granular control over egress traffic on a per-namespace basis
- Flexible configuration of static and dynamic external gateway IP addresses
- Support for both IPv4 and IPv6 address families
24.13.1. Prerequisites
- Your cluster uses the OVN-Kubernetes network plugin.
- Your infrastructure is configured to route traffic from the secondary external gateway.
24.13.2. How OpenShift Container Platform determines the external gateway IP address
You configure a secondary external gateway with the AdminPolicyBasedExternalRoute custom resource (CR), which is defined in the k8s.ovn.org API group. The CR supports static and dynamic approaches to specifying an external gateway IP address. Each namespace that an AdminPolicyBasedExternalRoute CR targets cannot be selected by any other AdminPolicyBasedExternalRoute CR.
Changes to policies are isolated in the controller: if a policy fails to apply, changes to other policies do not trigger a retry of those policies. A policy is re-evaluated, applying any differences that might have occurred, only when the policy itself is updated or when objects related to the policy change, such as target namespaces, pod gateways, or the namespaces that host them from dynamic hops.
- Static assignment
- You specify an IP address directly.
- Dynamic assignment
You specify an IP address indirectly, with namespace and pod selectors, and an optional network attachment definition.
- If the name of a network attachment definition is provided, the external gateway IP address of the network attachment is used.
- If the name of a network attachment definition is not provided, the external gateway IP address for the pod itself is used. However, this approach works only if the pod is configured with hostNetwork set to true.
24.13.3. AdminPolicyBasedExternalRoute object configuration
You can define an AdminPolicyBasedExternalRoute object to configure the external gateway. The following tables describe the fields of the AdminPolicyBasedExternalRoute object. The field names are taken from the example configurations in the next section.

| Field | Type | Description |
|---|---|---|
| metadata.name | string | Specifies the name of the AdminPolicyBasedExternalRoute object. |
| spec.from | object | Specifies a namespace selector that the routing policies apply to. A namespace can only be targeted by one AdminPolicyBasedExternalRoute CR. If a namespace is selected by more than one AdminPolicyBasedExternalRoute CR, a failed status occurs on the second and subsequent CRs that target the same namespace. |
| spec.nextHops | object | Specifies the destinations where the packets are forwarded to. Must be either or both of static and dynamic. |

The nextHops stanza contains the following fields:

| Field | Type | Description |
|---|---|---|
| static | array | Specifies an array of static IP addresses. |
| dynamic | array | Specifies an array of pod selectors corresponding to pods configured with a network attachment definition to use as the external gateway target. |

Each entry in the static array contains the following fields:

| Field | Type | Description |
|---|---|---|
| ip | string | Specifies either an IPv4 or IPv6 address of the next destination hop. |
| bfdEnabled | boolean | Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false. |

Each entry in the dynamic array contains the following fields:

| Field | Type | Description |
|---|---|---|
| podSelector | object | Specifies a [set-based](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement) label selector to filter the pods in the namespace that match this network configuration. |
| namespaceSelector | object | Specifies a set-based label selector to filter the namespaces that the podSelector applies to. |
| bfdEnabled | boolean | Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false. |
| networkAttachmentName | string | Optional: Specifies the name of a network attachment definition. The name must match the list of logical networks associated with the pod. If this field is not specified, the host network of the pod is used. However, the pod must be configured as a host network pod to use the host network. |
24.13.3.1. Example secondary external gateway configurations
In the following example, the AdminPolicyBasedExternalRoute object configures two static IP addresses as external gateways for the pods in the namespace that is labeled kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059:
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
name: default-route-policy
spec:
from:
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059
nextHops:
static:
- ip: "172.18.0.8"
- ip: "172.18.0.9"
In the following example, the AdminPolicyBasedExternalRoute object configures dynamic external gateways. The gateway pods are chosen with pod and namespace selectors, and each selected pod supplies its external gateway IP address through the named network attachment:
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
name: shadow-traffic-policy
spec:
from:
namespaceSelector:
matchLabels:
externalTraffic: ""
nextHops:
dynamic:
- podSelector:
matchLabels:
gatewayPod: ""
namespaceSelector:
matchLabels:
shadowTraffic: ""
networkAttachmentName: shadow-gateway
- podSelector:
matchLabels:
gigabyteGW: ""
namespaceSelector:
matchLabels:
gatewayNamespace: ""
networkAttachmentName: gateway
In the following example, the AdminPolicyBasedExternalRoute object configures both static and dynamic next hops as external gateways:
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
name: multi-hop-policy
spec:
from:
namespaceSelector:
matchLabels:
trafficType: "egress"
nextHops:
static:
- ip: "172.18.0.8"
- ip: "172.18.0.9"
dynamic:
- podSelector:
matchLabels:
gatewayPod: ""
namespaceSelector:
matchLabels:
egressTraffic: ""
networkAttachmentName: gigabyte
24.13.4. Configure a secondary external gateway
You can configure an external gateway on the default network for a namespace in your cluster.
Prerequisites

- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
Procedure
- Create a YAML file that contains an AdminPolicyBasedExternalRoute object.
- To create an admin policy based external route, enter the following command:

$ oc create -f <file>.yaml

where:

- <file>
- Specifies the name of the YAML file that you created in the previous step.

Example output

adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created

To confirm that the admin policy based external route was created, enter the following command:

$ oc describe apbexternalroute <name> | tail -n 6

where:

- <name>
- Specifies the name of the AdminPolicyBasedExternalRoute object.

Example output

Status:
  Last Transition Time:  2023-04-24T15:09:01Z
  Messages:
  Configured external gateway IPs: 172.18.0.8
  Status:  Success
Events:  <none>
24.13.5. Additional resources
- For more information about additional network attachments, see Understanding multiple networks
24.14. Configuring an egress firewall for a project
As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster.
24.14.1. How an egress firewall works in a project
As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:
- A pod can only connect to internal hosts and cannot initiate connections to the public internet.
- A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster.
- A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster.
- A pod can only connect to specific external hosts.
For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or, you can prevent application developers from using Python pip mirrors and force updates to come only from approved sources.
You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:
- An IP address range in CIDR format
- A DNS name that resolves to an IP address
- A port number
- A protocol that is one of the following: TCP, UDP, or SCTP
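Several of these criteria can be combined in a single rule. The following sketch, using the hypothetical domain www.example.com, allows TCP traffic on port 443 to that DNS name and then denies all other egress traffic; adapt the names and ranges to your project:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow only HTTPS to one hypothetical external host.
  - type: Allow
    to:
      dnsName: www.example.com
    ports:
    - port: 443
      protocol: TCP
  # Deny everything else leaving the cluster from this project.
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```

Because rules are evaluated in order, the Allow rule must precede the global Deny rule.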
24.14.1.1. Limitations of an egress firewall
An egress firewall has the following limitations:
- No project can have more than one EgressFirewall CR.
- Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.
Route - Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
If your egress firewall includes a deny rule for 0.0.0.0/0, access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers.

The following example illustrates the order of the egress firewall rules necessary to ensure API server access:

API server access example

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <api_server_address_range>
    type: Allow
  # ...
  - to:
      cidrSelector: 0.0.0.0/0
    type: Deny

where:

- <namespace>
- Specifies the namespace for the egress firewall.
- <api_server_address_range>
- Specifies the IP address range that includes your OpenShift Container Platform API servers.
- <cidrSelector>
- Specifies a value of 0.0.0.0/0 to set a global deny rule that prevents access to the OpenShift Container Platform API servers.

To find the IP address for your API servers, run oc get ep kubernetes -n default.

For more information, see BZ#1988324.
- A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project.
EgressFirewall - If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
- In general, using Domain Name Server (DNS) names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.
An EgressFirewall CR object cannot be created in the kube-node-lease, kube-public, and kube-system projects, in the openshift project, or in any project whose name begins with the openshift- prefix.
24.14.1.2. Matching order for egress firewall policy rules
OVN-Kubernetes evaluates egress firewall policy rules in the order they are defined in, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
24.14.1.3. How Domain Name Server (DNS) resolution works
If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:
- Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
- The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
- Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes.
24.14.1.3.1. Improved DNS resolution and resolving wildcard domain names
There might be situations where the IP addresses associated with a DNS record change frequently, or you might want to specify wildcard domain names in your egress firewall policy rules.
In this situation, the OVN-Kubernetes cluster manager creates a DNSNameResolver custom resource (CR) object for each unique DNS name that is used in your egress firewall policy rules.
Improved DNS resolution for egress firewall rules is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Example DNSNameResolver CR definition
apiVersion: networking.openshift.io/v1alpha1
kind: DNSNameResolver
spec:
  name: www.example.com.
status:
  resolvedNames:
  - dnsName: www.example.com.
    resolvedAddress:
    - ip: "1.2.3.4"
      ttlSeconds: 60
      lastLookupTime: "2023-08-08T15:07:04Z"
where:
- <name>
- Specifies the DNS name. This can be either a standard DNS name or a wildcard DNS name. For a wildcard DNS name, the DNS name resolution information contains all of the DNS names that match the wildcard DNS name.
- <dnsName>
- Specifies the resolved DNS name matching the spec.name field. If the spec.name field contains a wildcard DNS name, then multiple dnsName entries are created that contain the standard DNS names that match the wildcard DNS name when resolved. If the wildcard DNS name can also be successfully resolved, then this field also stores the wildcard DNS name.
- <ip>
- Specifies the current IP addresses associated with the DNS name.
- <ttlSeconds>
- Specifies the last time-to-live (TTL) duration.
- <lastLookupTime>
- Specifies the last lookup time.
If, during DNS resolution, the DNS name in the query matches any name defined in a DNSNameResolver CR object, the resolution information is updated in the status field of the CR.
The OVN-Kubernetes cluster manager watches for updates to an EgressFirewall CR object and creates, modifies, or deletes DNSNameResolver CR objects for all the unique DNS names in the egress firewall rules.
Do not modify DNSNameResolver CR objects directly. Doing so can lead to unwanted behavior of the egress firewall.
24.14.2. EgressFirewall custom resource (CR)
You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to.

The following YAML describes an EgressFirewall CR object:
EgressFirewall object
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: <ovn>
spec:
  egress: <egress_rules>
...
where:
- <ovn>
- The name for the object must be default.
- <egress_rules>
- Specifies a collection of one or more egress network policy rules as described in the following section.
24.14.2.1. EgressFirewall rules
The following YAML describes the rules for an EgressFirewall object. A rule can match traffic by CIDR range, by DNS name, or by using a nodeSelector. The egress stanza expects an array of one or more rule objects.
Egress policy rule stanza
egress:
- type: <type>
  to:
    cidrSelector: <cidr_range>
    dnsName: <dns_name>
    nodeSelector: <label_name>: <label_value>
  ports: <optional_port>
...
where:
- <type>
- Specifies the type of rule. The value must be either Allow or Deny.
- <to>
- Specifies a stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule.
- <cidr_range>
- Specifies an IP address range in CIDR format.
- <dns_name>
- Specifies a DNS domain name.
- <nodeSelector>
- Specifies labels, which are key-value pairs that the user defines. Labels are attached to objects, such as pods. The nodeSelector allows for one or more node labels to be selected and attached to pods.
- <ports>
- Specifies an optional field that describes a collection of network ports and protocols for the rule.
Ports stanza
ports:
- port: <port>
  protocol: <protocol>

where:

- <port>
- Specifies a network port, such as 80 or 443. If you specify a value for this field, you must also specify a value for the protocol field.
- <protocol>
- Specifies a network protocol. The value must be either TCP, UDP, or SCTP.
24.14.2.2. Example EgressFirewall CR
The following example defines several egress firewall policy rules:
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
where:
- <egress>
- Specifies a collection of egress firewall policy rule objects.
The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic uses either the TCP protocol and destination port 80 or any protocol and destination port 443:
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 172.16.1.1/32
    ports:
    - port: 80
      protocol: TCP
    - port: 443
24.14.2.3. Example EgressFirewall CR using nodeSelector
As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a nodeSelector label. In the following example, traffic is allowed to nodes with the region=east label:
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - to:
      nodeSelector:
        matchLabels:
          region: east
    type: Allow
24.14.3. Creating an EgressFirewall custom resource (CR)
As a cluster administrator, you can create an egress firewall policy object for a project.
If the project already has an EgressFirewall CR object, you must edit the existing policy to make changes to the egress firewall rules.
Prerequisites

- A cluster that uses the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Create a policy rule:

- Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.
- In the file that you created, define an EgressFirewall object.
- Create the policy object by entering the following command. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to.

$ oc create -f <policy_name>.yaml -n <project>

Successful output lists the egressfirewall.k8s.ovn.org/v1 name and the created status.

- Optional: Save the <policy_name>.yaml file so that you can make changes later.
24.15. Viewing an egress firewall for a project
As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall.
24.15.1. Viewing an EgressFirewall custom resource (CR)
You can view an EgressFirewall CR object in your cluster.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- You must log in to the cluster.
Procedure
Optional: To view the names of the EgressFirewall CR objects defined in your cluster, enter the following command:

$ oc get egressfirewall --all-namespaces

To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect.

$ oc describe egressfirewall <policy_name>

Example output

Name:  default
Namespace:  project1
Created:  20 minutes ago
Labels:  <none>
Annotations:  <none>
Rule:  Allow to 1.2.3.0/24
Rule:  Allow to www.example.com
Rule:  Deny to 0.0.0.0/0
24.16. Editing an egress firewall for a project
As a cluster administrator, you can modify network traffic rules for an existing egress firewall.
24.16.1. Editing an EgressFirewall custom resource (CR)
As a cluster administrator, you can update the egress firewall for a project.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressFirewall CR object for the project. Replace <project> with the name of the project.

$ oc get -n <project> egressfirewall

Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy.

$ oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml

Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to.

After making changes to the policy rules, enter the following command to replace the EgressFirewall CR object. Replace <filename> with the name of the file containing the updated EgressFirewall CR.

$ oc replace -f <filename>.yaml
24.17. Removing an egress firewall from a project
As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster.
24.17.1. Removing an EgressFirewall CR
As a cluster administrator, you can remove an egress firewall from a project.
Prerequisites
- A cluster using the OVN-Kubernetes network plugin.
- Install the OpenShift CLI (oc).
- You must log in to the cluster as a cluster administrator.
Procedure
Find the name of the EgressFirewall CR object for the project. Replace <project> with the name of the project.

$ oc get egressfirewall -n <project>

Delete the EgressFirewall CR object by entering the following command. Replace <project> with the name of the project and <name> with the name of the object.

$ oc delete -n <project> egressfirewall <name>
24.18. Configuring an egress IP address
As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
Configuring egress IPs is not supported for ROSA with HCP clusters at this time.
24.18.1. Egress IP address architectural design and implementation
By using the OpenShift Container Platform egress IP address functionality, you can ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.
For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.
In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.
Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0.
24.18.1.1. Platform support
The Egress IP address feature that runs on a primary host network is supported on the following platforms:
| Platform | Supported |
|---|---|
| Bare metal | Yes |
| VMware vSphere | Yes |
| Red Hat OpenStack Platform (RHOSP) | Yes |
| Amazon Web Services (AWS) | Yes |
| Google Cloud | Yes |
| Microsoft Azure | Yes |
| IBM Z® and IBM® LinuxONE | Yes |
| IBM Z® and IBM® LinuxONE for Red Hat Enterprise Linux (RHEL) KVM | Yes |
| IBM Power® | Yes |
| Nutanix | Yes |
The Egress IP address feature that runs on secondary host networks is supported on the following platform:
| Platform | Supported |
|---|---|
| Bare metal | Yes |
The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). (BZ#2039656).
24.18.1.2. Public cloud platform considerations
Typically, public cloud providers place a limit on egress IPs. This means that there is a constraint on the absolute number of assignable IP addresses per node for clusters provisioned on public cloud infrastructure. The maximum number of assignable IP addresses per node, or the IP capacity, can be described in the following formula:
IP capacity = public cloud default capacity - sum(current IP assignments)
While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, if a public cloud provider limits IP address capacity to 10 IP addresses per node, and you have 8 nodes, the total number of assignable IP addresses is only 80. To achieve a higher IP address capacity, you would need to allocate additional nodes. For example, if you needed 150 assignable IP addresses, you would need to allocate 7 additional nodes.
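The node-count arithmetic above can be sketched as follows; the function and its inputs are illustrative, not part of any OpenShift API:

```python
import math

def additional_nodes_needed(required_ips: int, current_nodes: int, per_node_capacity: int) -> int:
    """Return how many extra nodes are needed to host `required_ips` egress IPs.

    Assumes every node has the same public-cloud per-node IP capacity.
    """
    total_nodes_needed = math.ceil(required_ips / per_node_capacity)
    return max(0, total_nodes_needed - current_nodes)

# Example from the text: 8 nodes at 10 IPs per node yield 80 assignable IPs.
# To reach 150 assignable IPs, 7 additional nodes are required.
print(additional_nodes_needed(150, 8, 10))  # -> 7
```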
To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the `oc get node <node_name> -o yaml` command and inspect the `cloud.network.openshift.io/egress-ipconfig` annotation.
The annotation value is an array with a single object with fields that provide the following information for the primary network interface:
- `interface`: Specifies the interface ID on AWS and Azure and the interface name on Google Cloud.
- `ifaddr`: Specifies the subnet mask for one or both IP address families.
- `capacity`: Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and Google Cloud, the IP address capacity includes both IPv4 and IPv6 addresses.
Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.14.
The RHOSP egress IP address feature creates a Neutron reservation port called `egressip-<IP address>`. When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The `CloudPrivateIPConfig` object cannot perform delete and move operations until the floating IP is removed from the reservation port.
The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability.
Example cloud.network.openshift.io/egress-ipconfig annotation on AWS
cloud.network.openshift.io/egress-ipconfig: [
{
"interface":"eni-078d267045138e436",
"ifaddr":{"ipv4":"10.0.128.0/18"},
"capacity":{"ipv4":14,"ipv6":15}
}
]
Example cloud.network.openshift.io/egress-ipconfig annotation on Google Cloud
cloud.network.openshift.io/egress-ipconfig: [
{
"interface":"nic0",
"ifaddr":{"ipv4":"10.0.128.0/18"},
"capacity":{"ip":14}
}
]
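As an illustration, the annotation value shown above can be parsed with ordinary JSON tooling to apply the capacity formula; the `assigned_ipv4` count here is an assumed value, not something read from a real cluster:

```python
import json

# Annotation value copied from the AWS example above.
annotation = '''[
  {
    "interface": "eni-078d267045138e436",
    "ifaddr": {"ipv4": "10.0.128.0/18"},
    "capacity": {"ipv4": 14, "ipv6": 15}
  }
]'''

config = json.loads(annotation)[0]
assigned_ipv4 = 3  # egress IPs already assigned to this node (assumed)

# IP capacity = public cloud default capacity - sum(current IP assignments)
remaining = config["capacity"]["ipv4"] - assigned_ipv4
print(f'{config["interface"]}: {remaining} IPv4 egress IPs still assignable')
```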
The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation.
24.18.1.2.1. Amazon Web Services (AWS) IP address capacity limits
On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type in the AWS documentation.
24.18.1.2.2. Google Cloud IP address capacity limits
On Google Cloud, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity.
The following capacity limits exist for IP aliasing assignment:
- Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100.
- Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000.
For more information, see Per instance quotas and Alias IP ranges overview.
24.18.1.2.3. Microsoft Azure IP address capacity limits
On Azure, the following capacity limits exist for IP address assignment:
- Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256.
- Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536.
For more information, see Networking limits.
24.18.1.3. Considerations for using an egress IP on additional network interfaces
In OpenShift Container Platform, egress IPs provide administrators a way to control network traffic. Egress IPs can be used with a `br-ex`, or primary, network interface or with additional network interfaces.
You can inspect your network interface type by running the following command:
$ ip -details link show
The primary network interface is assigned a node IP address that also contains a subnet mask. Information for this node IP address can be retrieved from the Kubernetes node object for each node within your cluster by inspecting the `k8s.ovn.org/node-primary-ifaddr` annotation. For example:

"k8s.ovn.org/node-primary-ifaddr: {"ipv4":"192.168.111.23/24"}"
If the egress IP is not within the subnet of the primary network interface subnet, you can use an egress IP on another Linux network interface that is not of the primary network interface type. By doing so, OpenShift Container Platform administrators are provided with a greater level of control over networking aspects such as routing, addressing, segmentation, and security policies. This feature provides users with the option to route workload traffic over specific network interfaces for purposes such as traffic segmentation or meeting specialized requirements.
If the egress IP is not within the subnet of the primary network interface, another network interface on the node might be selected to carry the egress traffic, if such an interface is present.
You can determine which other network interfaces might support egress IPs by inspecting the `k8s.ovn.org/host-cidrs` Kubernetes node annotation.
OVN-Kubernetes provides a mechanism to control and direct outbound network traffic from specific namespaces and pods. This ensures that it exits the cluster through a particular network interface and with a specific egress IP address.
24.18.1.3.1. Requirements for assigning an egress IP to a network interface that is not the primary network interface
For users who want an egress IP and traffic to be routed over a particular interface that is not the primary network interface, the following conditions must be met:
- OpenShift Container Platform is installed on a bare-metal cluster. This feature is disabled within a cloud or a hypervisor environment.
- Your OpenShift Container Platform pods are not configured as host-networked.
- If a network interface is removed, or if the IP address and subnet mask that allow the egress IP to be hosted on the interface are removed, the egress IP is reconfigured. Consequently, the egress IP could be assigned to another node and interface.
- If you use an Egress IP address on a secondary network interface card (NIC), you must use the Node Tuning Operator to enable IP forwarding on the secondary NIC.
24.18.1.4. Assignment of egress IPs to pods
To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied:
- At least one node in your cluster must have the `k8s.ovn.org/egress-assignable: ""` label.
- An `EgressIP` object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace.

If you create `EgressIP` objects before labeling any nodes in your cluster with `k8s.ovn.org/egress-assignable: ""`, OpenShift Container Platform might assign every egress IP address to the first node with the label. To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes you intend to host the egress IP addresses before creating any `EgressIP` objects.
24.18.1.5. Assignment of egress IPs to nodes
When creating an `EgressIP` object, the following conditions apply to nodes that are labeled with the `k8s.ovn.org/egress-assignable: ""` label:

- An egress IP address is never assigned to more than one node at a time.
- An egress IP address is equally balanced between available nodes that can host the egress IP address.
- If the `spec.egressIPs` array in an `EgressIP` object specifies more than one IP address, the following conditions apply:
  - No node will ever host more than one of the specified IP addresses.
  - Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
- If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions.
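These constraints can be sketched as a toy assignment function; this is an illustration of the rules, not the actual OVN-Kubernetes scheduling logic:

```python
def assign_egress_ips(egress_ips: list, assignable_nodes: list) -> dict:
    """Toy sketch: each egress IP goes to exactly one node, and no node
    hosts more than one IP from the same EgressIP object, so there must
    be at least as many labeled nodes as IP addresses."""
    if len(egress_ips) > len(assignable_nodes):
        raise ValueError("not enough egress-assignable nodes for this EgressIP object")
    # Pair each IP with a distinct node.
    return dict(zip(egress_ips, assignable_nodes))

# Example values from the architectural diagram later in this section.
assignment = assign_egress_ips(["192.168.126.10", "192.168.126.102"], ["node1", "node3"])
print(assignment)  # {'192.168.126.10': 'node1', '192.168.126.102': 'node3'}
```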
When a pod matches the selector for multiple `EgressIP` objects, there is no guarantee which of the egress IP addresses that are specified in the `EgressIP` objects is assigned as the egress IP address for the pod.

Additionally, if an `EgressIP` object specifies multiple egress IP addresses, there is no guarantee which of those addresses is used. For example, if a pod matches a selector for an `EgressIP` object that specifies the two egress IP addresses `10.10.20.1` and `10.10.20.2`, either address might be used for each connection.
24.18.1.6. Architectural diagram of an egress IP address configuration
The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the `192.168.126.0/18` CIDR block on the host network.

Both Node 1 and Node 3 are labeled with `k8s.ovn.org/egress-assignable: ""` and are therefore available for the assignment of egress IP addresses.

The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example `EgressIP` object, the source IP address is either `192.168.126.10` or `192.168.126.102`.
The following resources from the diagram are illustrated in detail:
Namespace objects

The namespaces are defined in the following manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: namespace1
  labels:
    env: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace2
  labels:
    env: prod

EgressIP object

The following `EgressIP` object describes a configuration that selects all pods in any namespace with the `env` label set to `prod`. The egress IP addresses for the selected pods are `192.168.126.10` and `192.168.126.102`.

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressips-prod
spec:
  egressIPs:
  - 192.168.126.10
  - 192.168.126.102
  namespaceSelector:
    matchLabels:
      env: prod
status:
  items:
  - node: node1
    egressIP: 192.168.126.10
  - node: node3
    egressIP: 192.168.126.102

For the configuration in the previous example, OpenShift Container Platform assigns both egress IP addresses to the available nodes. The `status` field reflects whether and where the egress IP addresses are assigned.
24.18.2. EgressIP object
The following YAML describes the API for the `EgressIP` object. `EgressIP` selected pods cannot serve as backends for services with `externalTrafficPolicy` set to `Local`.
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: <name>
spec:
egressIPs:
- <ip_address>
namespaceSelector:
...
podSelector:
...
1. The name for the `EgressIP` object.
2. An array of one or more IP addresses.
3. One or more selectors for the namespaces to associate the egress IP addresses with.
4. Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace.
The following YAML describes the stanza for the namespace selector:
Namespace selector stanza
namespaceSelector:
matchLabels:
<label_name>: <label_value>
1. One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected.
The following YAML describes the optional stanza for the pod selector:
Pod selector stanza
podSelector:
matchLabels:
<label_name>: <label_value>
1. Optional: One or more matching rules for pods in the namespaces that match the specified `namespaceSelector` rules. If specified, only pods that match are selected. Other pods in the namespace are not selected.
In the following example, the `EgressIP` object associates the egress IP addresses `192.168.126.11` and `192.168.126.102` with pods that have the `app` label set to `web` and are in the namespaces that have the `env` label set to `prod`.
Example EgressIP object
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: egress-group1
spec:
egressIPs:
- 192.168.126.11
- 192.168.126.102
podSelector:
matchLabels:
app: web
namespaceSelector:
matchLabels:
env: prod
In the following example, the `EgressIP` object associates the egress IP addresses `192.168.127.30` and `192.168.127.40` with any pods that do not have the `environment` label set to `development`.
Example EgressIP object
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: egress-group2
spec:
egressIPs:
- 192.168.127.30
- 192.168.127.40
namespaceSelector:
matchExpressions:
- key: environment
operator: NotIn
values:
- development
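As a sketch of how the `NotIn` operator in the example above evaluates, the following hypothetical helper mirrors Kubernetes set-based selector semantics. Note that a namespace without the label key also matches `NotIn`:

```python
def matches_not_in(labels: dict, key: str, values: list) -> bool:
    """Sketch of the Kubernetes NotIn operator: true when the label value
    is absent or is not one of the listed values."""
    return labels.get(key) not in values

# Namespaces from the example: only non-development namespaces are selected.
print(matches_not_in({"environment": "production"}, "environment", ["development"]))   # True
print(matches_not_in({"environment": "development"}, "environment", ["development"]))  # False
print(matches_not_in({}, "environment", ["development"]))                              # True
```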
24.18.3. The egressIPConfig object
As a feature of egress IP, the `reachabilityTotalTimeoutSeconds` parameter configures the total timeout, in seconds, for the egress IP node reachability check. If the egress IP node cannot be reached within this timeout, the node is declared down.

You can set a value for the `reachabilityTotalTimeoutSeconds` parameter in the `egressIPConfig` object of the cluster `Network` custom resource.

If you omit the `reachabilityTotalTimeoutSeconds` parameter from the `egressIPConfig` object, the platform uses a default value of `1` second. A value of `0` disables the reachability check for the egress IP node.

The following `egressIPConfig` example changes the `reachabilityTotalTimeoutSeconds` from the default `1` second probe to a `5` second probe:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
defaultNetwork:
ovnKubernetesConfig:
egressIPConfig:
reachabilityTotalTimeoutSeconds: 5
gatewayConfig:
routingViaHost: false
genevePort: 6081
1. The `egressIPConfig` object holds the configurations for the options of the `EgressIP` object. By changing these configurations, you can extend the `EgressIP` object.
2. The value for `reachabilityTotalTimeoutSeconds` accepts integer values from `0` to `60`. A value of `0` disables the reachability check of the egress IP node. Setting a value from `1` to `60` corresponds to the timeout in seconds for a probe to send the reachability check to the node.
24.18.4. Labeling a node to host egress IP addresses
You can apply the `k8s.ovn.org/egress-assignable=""` label to a node in your cluster so that OpenShift Container Platform can assign one or more egress IP addresses to the node.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster as a cluster administrator.
Procedure
To label a node so that it can host one or more egress IP addresses, enter the following command:
$ oc label nodes <node_name> k8s.ovn.org/egress-assignable=""

Replace `<node_name>` with the name of the node to label.

Tip: You can alternatively apply the following YAML to add the label to a node:

apiVersion: v1
kind: Node
metadata:
  labels:
    k8s.ovn.org/egress-assignable: ""
  name: <node_name>
24.18.5. Next steps
24.19. Assigning an egress IP address
As a cluster administrator, you can assign an egress IP address for traffic leaving the cluster from a namespace or from specific pods in a namespace.
24.19.1. Assigning an egress IP address to a namespace
You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in to the cluster as a cluster administrator.
- Configure at least one node to host an egress IP address.
Procedure
1. Create an `EgressIP` object:
   a. Create a `<egressips_name>.yaml` file, where `<egressips_name>` is the name of the object.
   b. In the file that you created, define an `EgressIP` object, as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egress-project1
      spec:
        egressIPs:
        - 192.168.127.10
        - 192.168.127.11
        namespaceSelector:
          matchLabels:
            env: qa

2. To create the object, enter the following command. Replace `<egressips_name>` with the name of the object:

   $ oc apply -f <egressips_name>.yaml

   Example output

   egressips.k8s.ovn.org/<egressips_name> created

3. Optional: Store the `<egressips_name>.yaml` file so that you can make changes later.

4. Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an `EgressIP` object defined in step 1, run the following command. Replace `<namespace>` with the namespace that requires egress IP addresses:

   $ oc label ns <namespace> env=qa
Verification
To show all egress IPs that are in use in your cluster, enter the following command:
$ oc get egressip -o yaml

Note: The `oc get egressip` command only returns one egress IP address regardless of how many are configured. This is not a bug and is a limitation of Kubernetes. As a workaround, you can pass in the `-o yaml` or `-o json` flags to return all egress IP addresses in use.

Example output

# ...
spec:
  egressIPs:
  - 192.168.127.10
  - 192.168.127.11
# ...
24.20. Configuring an egress service
As a cluster administrator, you can configure egress traffic for pods behind a load balancer service by using an egress service.
Egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use the `EgressService` custom resource (CR) to manage egress traffic in the following ways:

- Assign a load balancer service IP address as the source IP address for egress traffic for pods behind the load balancer service.

  Assigning the load balancer IP address as the source IP address in this context is useful to present a single point of egress and ingress. For example, in some scenarios, an external system communicating with an application behind a load balancer service can expect the source and destination IP address for the application to be the same.

  Note: When you assign the load balancer service IP address to egress traffic for pods behind the service, OVN-Kubernetes restricts the ingress and egress point to a single node. This limits the load balancing of traffic that MetalLB typically provides.

- Assign the egress traffic for pods behind a load balancer to a different network than the default node network.

  This is useful to assign the egress traffic for applications behind a load balancer to a different network than the default network. Typically, the different network is implemented by using a VRF instance associated with a network interface.
24.20.1. Egress service custom resource
Define the configuration for an egress service in an `EgressService` custom resource. The following YAML describes the fields for the configuration of an egress service:
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
name: <egress_service_name>
namespace: <namespace>
spec:
sourceIPBy: <egress_traffic_ip>
nodeSelector:
matchLabels:
node-role.kubernetes.io/<role>: ""
network: <egress_traffic_network>
1. Specify the name for the egress service. The name of the `EgressService` resource must match the name of the load-balancer service that you want to modify.
2. Specify the namespace for the egress service. The namespace for the `EgressService` must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
3. Specify the source IP address of egress traffic for pods behind a service. Valid values are `LoadBalancerIP` or `Network`. Use the `LoadBalancerIP` value to assign the `LoadBalancer` service ingress IP address as the source IP address for egress traffic. Specify `Network` to assign the network interface IP address as the source IP address for egress traffic.
4. Optional: If you use the `LoadBalancerIP` value for the `sourceIPBy` specification, a single node handles the `LoadBalancer` service traffic. Use the `nodeSelector` field to limit which node can be assigned this task. When a node is selected to handle the service traffic, OVN-Kubernetes labels the node in the following format: `egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: ""`. When the `nodeSelector` field is not specified, any node can manage the `LoadBalancer` service traffic.
5. Optional: Specify the routing table for egress traffic. If you do not include the `network` specification, the egress service uses the default host network.
Example egress service specification
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
name: test-egress-service
namespace: test-namespace
spec:
sourceIPBy: "LoadBalancerIP"
nodeSelector:
matchLabels:
vrf: "true"
network: "2"
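As a sketch, the node label key that OVN-Kubernetes applies to the selected node, in the format `egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>`, can be constructed from the example's namespace and name; this helper is illustrative only:

```python
def egress_service_node_label(svc_namespace: str, svc_name: str) -> str:
    """Sketch of the label key OVN-Kubernetes applies to the node selected
    to handle an EgressService's traffic (label value is the empty string)."""
    return f"egress-service.k8s.ovn.org/{svc_namespace}-{svc_name}"

print(egress_service_node_label("test-namespace", "test-egress-service"))
# egress-service.k8s.ovn.org/test-namespace-test-egress-service
```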
24.20.2. Deploying an egress service
You can deploy an egress service to manage egress traffic for pods behind a `LoadBalancer` service.

The following example configures the egress traffic to have the same source IP address as the ingress IP address of the `LoadBalancer` service.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
- You configured MetalLB and a `BGPPeer` resource.
Procedure
1. Create an `IPAddressPool` CR with the desired IP for the service:
   a. Create a file, such as `ip-addr-pool.yaml`, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: example-pool
        namespace: metallb-system
      spec:
        addresses:
        - 172.19.0.100/32

   b. Apply the configuration for the IP address pool by running the following command:

      $ oc apply -f ip-addr-pool.yaml
2. Create `Service` and `EgressService` CRs:
   a. Create a file, such as `service-egress-service.yaml`, with content like the following example:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service
        namespace: example-namespace
        annotations:
          metallb.universe.tf/address-pool: example-pool
      spec:
        selector:
          app: example
        ports:
          - name: http
            protocol: TCP
            port: 8080
            targetPort: 8080
        type: LoadBalancer
      ---
      apiVersion: k8s.ovn.org/v1
      kind: EgressService
      metadata:
        name: example-service
        namespace: example-namespace
      spec:
        sourceIPBy: "LoadBalancerIP"
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""

      1. The `LoadBalancer` service uses the IP address assigned by MetalLB from the `example-pool` IP address pool.
      2. This example uses the `LoadBalancerIP` value to assign the ingress IP address of the `LoadBalancer` service as the source IP address of egress traffic.
      3. When you specify the `LoadBalancerIP` value, a single node handles the `LoadBalancer` service's traffic. In this example, only nodes with the `worker` label can be selected to handle the traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: `egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: ""`.

      Note: If you use the `sourceIPBy: "LoadBalancerIP"` setting, you must specify the load-balancer node in the `BGPAdvertisement` custom resource (CR).

   b. Apply the configuration for the service and egress service by running the following command:

      $ oc apply -f service-egress-service.yaml
3. Create a `BGPAdvertisement` CR to advertise the service:
   a. Create a file, such as `service-bgp-advertisement.yaml`, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: example-bgp-adv
        namespace: metallb-system
      spec:
        ipAddressPools:
        - example-pool
        nodeSelectors:
        - matchLabels:
            egress-service.k8s.ovn.org/example-namespace-example-service: ""

      1. In this example, the `EgressService` CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod.
Verification
1. Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command. Update the external IP address and port number to suit your application endpoint:

   $ curl <external_ip_address>:<port_number>

2. If you assigned the `LoadBalancer` service's ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as `tcpdump` to analyze packets received at the external client.
24.21. Considerations for the use of an egress router pod
24.21.1. About an egress router pod
The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses.
The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software.
The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic.
24.21.1.1. Egress router modes
In redirect mode, an egress router pod configures `iptables` rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example:

$ curl <router_service_IP> <port>

The egress router CNI plugin supports redirect mode only. This is a difference with the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode.
24.21.1.2. Egress router pod implementation
The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod.
An egress router is a pod that has two network interfaces. For example, the pod can have `eth0` and `net1` network interfaces. The `eth0` interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The `net1` interface is on a secondary network and has an IP address and gateway for that network.

Traffic that leaves the egress router exits through a node, but the packets have the MAC address of the `net1` interface from the egress router pod.
When you add an egress router custom resource, the Cluster Network Operator creates the following objects:
- A network attachment definition for the `net1` secondary network interface of the pod.
- A deployment for the egress router.
If you delete an egress router custom resource, the Operator deletes the two objects in the preceding list that are associated with the egress router.
24.21.1.3. Deployment considerations
An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address.
- Red Hat OpenStack Platform (RHOSP)
If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail:
$ openstack port set --allowed-address \
  ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>

VMware vSphere

If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client.
Specifically, ensure that the following are enabled: MAC address changes, forged transmits, and promiscuous mode operation.
24.21.1.4. Failover configuration
To avoid downtime, the Cluster Network Operator deploys the egress router pod as a deployment resource. The deployment name is `egress-router-cni-deployment`. The pod that corresponds to the deployment has a label of `app=egress-router-cni`.

To create a new service for the deployment, use the `oc expose deployment/egress-router-cni-deployment --port <port_number>` command or create a file like the following example:
apiVersion: v1
kind: Service
metadata:
name: app-egress
spec:
ports:
- name: tcp-8080
protocol: TCP
port: 8080
- name: tcp-8443
protocol: TCP
port: 8443
- name: udp-80
protocol: UDP
port: 80
type: ClusterIP
selector:
app: egress-router-cni
24.22. Deploying an egress router pod in redirect mode
As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address.
The egress router implementation uses the egress router Container Network Interface (CNI) plugin.
24.22.1. Egress router custom resource
Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode:
apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
name: <egress_router_name>
namespace: <namespace>
spec:
addresses: [
{
ip: "<egress_router>",
gateway: "<egress_gateway>"
}
]
mode: Redirect
redirect: {
redirectRules: [
{
destinationIP: "<egress_destination>",
port: <egress_router_port>,
targetPort: <target_port>,
protocol: <network_protocol>
},
...
],
fallbackIP: "<egress_destination>"
}
1. Optional: The `namespace` field specifies the namespace to create the egress router in. If you do not specify a value in the file or on the command line, the `default` namespace is used.
2. The `addresses` field specifies the IP addresses to configure on the secondary network interface.
3. The `ip` field specifies the reserved source IP address and netmask from the physical network that the node is on to use with the egress router pod. Use CIDR notation to specify the IP address and netmask.
4. The `gateway` field specifies the IP address of the network gateway.
5. Optional: The `redirectRules` field specifies a combination of egress destination IP address, egress router port, and protocol. Incoming connections to the egress router on the specified port and protocol are routed to the destination IP address.
6. Optional: The `targetPort` field specifies the network port on the destination IP address. If this field is not specified, traffic is routed to the same network port that it arrived on.
7. The `protocol` field supports TCP, UDP, or SCTP.
8. Optional: The `fallbackIP` field specifies a destination IP address. If you do not specify any redirect rules, the egress router sends all traffic to this fallback IP address. If you specify redirect rules, any connections to network ports that are not defined in the rules are sent by the egress router to this fallback IP address. If you do not specify this field, the egress router rejects connections to network ports that are not defined in the rules.
Example egress router specification
apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
name: egress-router-redirect
spec:
networkInterface: {
macvlan: {
mode: "Bridge"
}
}
addresses: [
{
ip: "192.168.12.99/24",
gateway: "192.168.12.1"
}
]
mode: Redirect
redirect: {
redirectRules: [
{
destinationIP: "10.0.0.99",
port: 80,
protocol: UDP
},
{
destinationIP: "203.0.113.26",
port: 8080,
targetPort: 80,
protocol: TCP
},
{
destinationIP: "203.0.113.27",
port: 8443,
targetPort: 443,
protocol: TCP
}
]
}
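To illustrate how redirect rules map to `iptables` DNAT rules, the following hypothetical helper renders a rule string for each entry in the example above; the format mirrors the rules the egress router writes to its debug log, but this is a sketch, not the plugin's actual implementation:

```python
def dnat_rule(rule: dict) -> str:
    """Render one redirect rule as the iptables DNAT rule the egress
    router configures for it (sketch; format matches the pod's debug log)."""
    dest = rule["destinationIP"]
    if "targetPort" in rule:
        # Redirect to a different port on the destination.
        dest = f'{dest}:{rule["targetPort"]}'
    return (f'iptables -t nat PREROUTING -i eth0 -p {rule["protocol"]} '
            f'--dport {rule["port"]} -j DNAT --to-destination {dest}')

rules = [
    {"destinationIP": "10.0.0.99", "port": 80, "protocol": "UDP"},
    {"destinationIP": "203.0.113.26", "port": 8080, "targetPort": 80, "protocol": "TCP"},
]
for r in rules:
    print(dnat_rule(r))
```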
24.22.2. Deploying an egress router in redirect mode
You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses.
After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP.
Prerequisites
-
Install the OpenShift CLI ().
oc -
Log in as a user with privileges.
cluster-admin
Procedure
- Create an egress router definition.
To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example:

apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  ports:
  - name: web-app
    protocol: TCP
    port: 8080
  type: ClusterIP
  selector:
    app: egress-router-cni

The selector specifies the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable.
After you create the service, your pods can connect to the service. The egress router pod redirects traffic to the corresponding port on the destination IP address. The connections originate from the reserved source IP address.
Verification
To verify that the Cluster Network Operator started the egress router, complete the following procedure:
View the network attachment definition that the Operator created for the egress router:
$ oc get network-attachment-definition egress-router-cni-nad

The name of the network attachment definition is not configurable.
Example output
NAME                    AGE
egress-router-cni-nad   18m

View the deployment for the egress router pod:
$ oc get deployment egress-router-cni-deployment

The name of the deployment is not configurable.
Example output
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
egress-router-cni-deployment   1/1     1            1           18m

View the status of the egress router pod:
$ oc get pods -l app=egress-router-cni

Example output
NAME                                            READY   STATUS    RESTARTS   AGE
egress-router-cni-deployment-575465c75c-qkq6m   1/1     Running   0          18m

- View the logs and the routing table for the egress router pod.
Get the node name for the egress router pod:
$ POD_NODENAME=$(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}")

Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

$ oc debug node/$POD_NODENAME

Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host, you can run binaries from the executable paths of the host:

# chroot /host

From within the chroot environment console, display the egress router logs:

# cat /tmp/egress-router-log

Example output
2021-04-26T12:27:20Z [debug] Called CNI ADD
2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1
2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24]
2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443]
2021-04-26T12:27:20Z [debug] Created macvlan interface
2021-04-26T12:27:20Z [debug] Renamed macvlan to "net1"
2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface
2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254}
2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1
2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99
2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80
2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443
2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99

The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure.

From within the chroot environment console, get the container ID:

# crictl ps --name egress-router-cni-pod | awk '{print $1}'

Example output

CONTAINER
bac9fae69ddb6

Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6:

# crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print $2}'

Example output

68857

Enter the network namespace of the container:

# nsenter -n -t 68857

Display the routing table:

# ip route

In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99. The pod routes all other traffic to the gateway at IP address 192.168.12.1. Routing for the service network is not shown.

Example output
default via 192.168.12.1 dev net1
10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18
192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99
192.168.12.1 dev net1
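A routing table like the example output above can also be inspected programmatically. The following sketch (with the example table inlined as a string) extracts the default route's gateway and interface with awk:

```shell
#!/bin/sh
# Parse an `ip route` style routing table (here, the example output from
# the egress router pod, inlined) and pull out the default route fields.
routes='default via 192.168.12.1 dev net1
10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18
192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99
192.168.12.1 dev net1'

# "default via <gw> dev <iface>": gateway is field 3, interface is field 5.
gateway=$(echo "$routes" | awk '/^default/ {print $3}')
iface=$(echo "$routes" | awk '/^default/ {print $5}')

echo "default gateway: $gateway on $iface"
# -> default gateway: 192.168.12.1 on net1
```

On a live node you would pipe `ip route` into the same awk expressions instead of the inlined string.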
24.23. Enabling multicast for a project
24.23.1. About multicast
With IP multicast, data is broadcast to many IP addresses simultaneously.
- At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.
- By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of the exemption of multicast from network policies before enabling it.
Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis.
24.23.2. Enabling multicast between pods
You can enable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.

$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled=true

Tip: You can alternatively apply the following YAML to add the annotation:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/multicast-enabled: "true"
Verification
To verify that multicast is enabled for a project, complete the following procedure:
Change your current project to the project that you enabled multicast for. Replace <project> with the project name.

$ oc project <project>

Create a pod to act as a multicast receiver:

$ cat <<EOF| oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mlistener
  labels:
    app: multicast-verify
spec:
  containers:
    - name: mlistener
      image: registry.access.redhat.com/ubi9
      command: ["/bin/sh", "-c"]
      args:
        ["dnf -y install socat hostname && sleep inf"]
      ports:
        - containerPort: 30102
          name: mlistener
          protocol: UDP
EOF

Create a pod to act as a multicast sender:

$ cat <<EOF| oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: msender
  labels:
    app: multicast-verify
spec:
  containers:
    - name: msender
      image: registry.access.redhat.com/ubi9
      command: ["/bin/sh", "-c"]
      args:
        ["dnf -y install socat && sleep inf"]
EOF

In a new terminal window or tab, start the multicast listener.
Get the IP address for the Pod:
$ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')

Start the multicast listener by entering the following command:

$ oc exec mlistener -i -t -- \
    socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
Start the multicast transmitter.
Get the pod network IP address range:
$ CIDR=$(oc get Network.config.openshift.io cluster \
    -o jsonpath='{.status.clusterNetwork[0].cidr}')

To send a multicast message, enter the following command:

$ oc exec msender -i -t -- \
    /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"

If multicast is working, the previous command returns the following output:

mlistener
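The group address 224.1.0.1 used by the listener and sender above falls in the IPv4 multicast range 224.0.0.0/4, whose first octet is always 224 through 239. A small illustrative shell check (not part of the procedure) for whether an address is a multicast group:

```shell
#!/bin/sh
# IPv4 multicast (class D) addresses fall within 224.0.0.0/4,
# so the first octet is in the range 224-239.
is_multicast() {
    first_octet=${1%%.*}   # strip everything after the first dot
    [ "$first_octet" -ge 224 ] && [ "$first_octet" -le 239 ]
}

is_multicast 224.1.0.1 && echo "224.1.0.1 is a multicast group"
is_multicast 192.168.12.99 || echo "192.168.12.99 is a unicast address"
```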
24.24. Disabling multicast for a project
24.24.1. Disabling multicast between pods
You can disable multicast between pods for your project.
Prerequisites
- Install the OpenShift CLI (oc).
- You must log in to the cluster with a user that has the cluster-admin role.
Procedure
Disable multicast by running the following command:
$ oc annotate namespace <namespace> \
    k8s.ovn.org/multicast-enabled-

- The namespace for the project you want to disable multicast for.

Tip: You can alternatively apply the following YAML to delete the annotation:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>
  annotations:
    k8s.ovn.org/multicast-enabled: null
24.25. Tracking network flows
As a cluster administrator, you can collect information about pod network flows from your cluster to assist with the following areas:
- Monitor ingress and egress traffic on the pod network.
- Troubleshoot performance issues.
- Gather data for capacity planning and security audits.
When you enable the collection of network flows, only metadata about the traffic is collected. Packet payloads are not collected, but the protocol, source address, destination address, port numbers, number of bytes, and other packet-level information are collected.
The data is collected in one or more of the following record formats:
- NetFlow
- sFlow
- IPFIX
When you configure the Cluster Network Operator (CNO) with one or more collector IP addresses and port numbers, the Operator configures Open vSwitch (OVS) on each node to send the network flows records to each collector.
You can configure the Operator to send records to more than one type of network flow collector. For example, you can send records to NetFlow collectors and also send records to sFlow collectors.
When OVS sends data to the collectors, each type of collector receives identical records. For example, if you configure two NetFlow collectors, OVS on a node sends identical records to the two collectors. If you also configure two sFlow collectors, the two sFlow collectors receive identical records. However, each collector type has a unique record format.
Collecting the network flows data and sending the records to collectors affects performance. Nodes process packets at a slower rate. If the performance impact is too great, you can delete the destinations for collectors to disable collecting network flows data and restore performance.
Enabling network flow collectors might have an impact on the overall performance of the cluster network.
24.25.1. Network object configuration for tracking network flows
The fields for configuring network flows collectors in the Cluster Network Operator (CNO) are shown in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.exportNetworkFlows | object | One or more of netFlow, sFlow, or ipfix. |
| spec.exportNetworkFlows.netFlow.collectors | array | A list of IP address and network port pairs for up to 10 collectors. |
| spec.exportNetworkFlows.sFlow.collectors | array | A list of IP address and network port pairs for up to 10 collectors. |
| spec.exportNetworkFlows.ipfix.collectors | array | A list of IP address and network port pairs for up to 10 collectors. |
After applying the following manifest to the CNO, the Operator configures Open vSwitch (OVS) on each node in the cluster to send network flows records to the NetFlow collector that is listening at 192.168.1.99:2056.
Example configuration for tracking network flows
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056
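Each collector entry in the manifest above is an IP address and port pair, and each collector list accepts at most 10 entries. A minimal shell sketch of these constraints (illustrative only; this is not CNO validation code):

```shell
#!/bin/sh
# Check a list of ip:port collector entries against the documented
# constraints: a well-formed ip:port shape, and at most 10 collectors.
validate_collectors() {
    count=0
    for entry in "$@"; do
        count=$((count + 1))
        case "$entry" in
            *.*.*.*:*) ;;                           # crude ip:port shape check
            *) echo "invalid entry: $entry"; return 1 ;;
        esac
    done
    if [ "$count" -gt 10 ]; then
        echo "too many collectors: $count (maximum is 10)"
        return 1
    fi
    echo "ok: $count collector(s)"
}

validate_collectors 192.168.1.99:2056
# -> ok: 1 collector(s)
```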
24.25.2. Adding destinations for network flows collectors
As a cluster administrator, you can configure the Cluster Network Operator (CNO) to send network flows metadata about the pod network to a network flows collector.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
cluster-admin - You have a network flows collector and know the IP address and port that it listens on.
Procedure
Create a patch file that specifies the network flows collector type and the IP address and port information of the collectors:
spec:
  exportNetworkFlows:
    netFlow:
      collectors:
        - 192.168.1.99:2056

Configure the CNO with the network flows collectors:
$ oc patch network.operator cluster --type merge -p "$(cat <file_name>.yaml)"

Example output
network.operator.openshift.io/cluster patched
Verification
Verification is not typically necessary. You can run the following command to confirm that Open vSwitch (OVS) on each node is configured to send network flows records to one or more collectors.
View the Operator configuration to confirm that the exportNetworkFlows field is configured:

$ oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}"

Example output

{"netFlow":{"collectors":["192.168.1.99:2056"]}}

View the network flows configuration in OVS from each node:

$ for pod in $(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
    echo
    echo $pod
    oc -n openshift-ovn-kubernetes exec -c ovnkube-controller $pod \
      -- bash -c 'for type in ipfix sflow netflow; do ovs-vsctl find $type; done'
  done

Example output

ovnkube-node-xrn4p
_uuid               : a4d2aaca-5023-4f3d-9400-7275f92611f9
active_timeout      : 60
add_id_to_interface : false
engine_id           : []
engine_type         : []
external_ids        : {}
targets             : ["192.168.1.99:2056"]

ovnkube-node-z4vq9
_uuid               : 61d02fdb-9228-4993-8ff5-b27f01a29bd6
active_timeout      : 60
add_id_to_interface : false
engine_id           : []
engine_type         : []
external_ids        : {}
targets             : ["192.168.1.99:2056"]
...
24.25.3. Deleting all destinations for network flows collectors
As a cluster administrator, you can configure the Cluster Network Operator (CNO) to stop sending network flows metadata to a network flows collector.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster with a user with cluster-admin privileges.
Procedure
Remove all network flows collectors:
$ oc patch network.operator cluster --type='json' \
    -p='[{"op":"remove", "path":"/spec/exportNetworkFlows"}]'

Example output
network.operator.openshift.io/cluster patched
24.26. Configuring hybrid networking
As a cluster administrator, you can configure the Red Hat OpenShift Networking OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.
24.26.1. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
cluster-admin - Ensure that the cluster uses the OVN-Kubernetes network plugin.
Procedure
To configure the OVN-Kubernetes hybrid network overlay, enter the following command:
$ oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{
    "spec":{
      "defaultNetwork":{
        "ovnKubernetesConfig":{
          "hybridOverlayConfig":{
            "hybridClusterNetwork":[
              {
                "cidr": "<cidr>",
                "hostPrefix": <prefix>
              }
            ],
            "hybridOverlayVXLANPort": <overlay_port>
          }
        }
      }
    }
  }'

where:
cidr
- Specify the CIDR configuration used for nodes on the additional overlay network. This CIDR must not overlap with the cluster network CIDR.

hostPrefix
- Specifies the subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.

hybridOverlayVXLANPort
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
network.operator.openshift.io/cluster patched

To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.
$ oc get network.operator.openshift.io -o jsonpath="{.items[0].spec.defaultNetwork.ovnKubernetesConfig}"
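The hostPrefix arithmetic described above (a /23 per node yields 510 usable pod IP addresses) can be checked with shell arithmetic. The helper below is an illustrative sketch, not part of any OpenShift tooling:

```shell
#!/bin/sh
# Usable pod IP addresses on a node for a given hostPrefix:
# 2^(32 - hostPrefix) addresses in the per-node subnet, minus the
# network and broadcast addresses.
pods_per_node() {
    host_prefix=$1
    echo $(( (1 << (32 - host_prefix)) - 2 ))
}

pods_per_node 23   # -> 510
pods_per_node 24   # -> 254
```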