Networking
Configuring and managing cluster networking
Chapter 1. About the OVN-Kubernetes network plugin
The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for MicroShift clusters. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN).
- Default network configuration and connections are applied automatically in MicroShift with the `microshift-networking` RPM during installation.
- A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node.
- OVN-Kubernetes configures OVS on the node to implement the declared network configuration.
- Host physical interfaces are not bound by default to the OVN-Kubernetes gateway bridge, `br-ex`. You can use standard tools on the host, such as the NetworkManager CLI (`nmcli`), to manage the default gateway.
- Changing the CNI is not supported on MicroShift.
Using configuration files or custom scripts, you can configure the following networking settings:
- Subnet CIDR ranges for allocating IP addresses to pods
- The maximum transmission unit (MTU) value
- Firewall ingress and egress rules
- Network policies in the MicroShift cluster, including ingress and egress rules
1.1. MicroShift networking customization matrix
The following table summarizes the status of networking features and capabilities that are either present as defaults, supported for configuration, or not available with the MicroShift service:
| Network feature | Availability | Customization supported |
|---|---|---|
| Advertise address | Yes | Yes [1] |
| Kubernetes network policy | Yes | Yes |
| Kubernetes network policy logs | Not available | N/A |
| Load balancing | Yes | Yes |
| Multicast DNS | Yes | Yes [2] |
| Network proxies | Yes [3] | CRI-O |
| Network performance | Yes | MTU configuration |
| Egress IPs | Not available | N/A |
| Egress firewall | Not available | N/A |
| Egress router | Not available | N/A |
| Firewall | No [4] | Yes |
| Hardware offloading | Not available | N/A |
| Hybrid networking | Not available | N/A |
| IPsec encryption for intra-cluster communication | Not available | N/A |
| IPv6 | Not available [5] | N/A |
1. If unset, the default value is set to the next immediate subnet after the service network. For example, when the service network is `10.43.0.0/16`, the `advertiseAddress` is set to `10.44.0.0/32`.
2. You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the `5353/UDP` port.
3. There is no built-in transparent proxying of egress traffic in MicroShift. Egress must be manually configured.
4. Setting up the firewalld service is supported by RHEL for Edge.
5. IPv6 is not available in any configuration.
1.1.1. Default settings
If you do not create a `config.yaml` file, default values are used. The following example shows the default configuration settings.
To see the default values, run the following command:

```console
$ microshift show-config
```

Default values example output in YAML form
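A representative sketch of the default configuration follows; the hostname, node IP, and base domain depend on the host, and exact defaults can vary by release. The numbered comments correspond to the callout descriptions below:

```yaml
dns:
  baseDomain: example.com # 1
network:
  clusterNetwork:
    - 10.42.0.0/16 # 2
  serviceNetwork:
    - 10.43.0.0/16 # 3
  serviceNodePortRange: 30000-32767 # 4
node:
  hostnameOverride: "microshift-rhel9" # 5
  nodeIP: 10.0.0.1 # 6
apiServer:
  advertiseAddress: 10.44.0.0 # 7
  subjectAltNames: [] # 8
debugging:
  logLevel: "Normal" # 9
```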
1. Base domain of the cluster. All managed DNS records will be subdomains of this base.
2. A block of IP addresses from which Pod IP addresses are allocated.
3. A block of virtual IP addresses for Kubernetes services.
4. The port range allowed for Kubernetes services of type NodePort.
5. The name of the node. The default value is the hostname.
6. The IP address of the node. The default value is the IP address of the default route.
7. A string that specifies the IP address from which the API server is advertised to members of the cluster. The default value is calculated based on the address of the service network.
8. Subject Alternative Names for API server certificates.
9. Log verbosity. Valid values for this field are `Normal`, `Debug`, `Trace`, or `TraceAll`.
1.2. Network features
Networking features available with MicroShift 4.15 include:
- Kubernetes network policy
- Dynamic node IP
- Custom gateway interface
- Second gateway interface
- Cluster network on specified host interface
- Blocking external access to NodePort service on specific host interfaces
Networking features not available with MicroShift 4.15:
- Egress IP/firewall/QoS: disabled
- Hybrid networking: not supported
- IPsec: not supported
- Hardware offload: not supported
1.3. IP forward
The host network sysctl `net.ipv4.ip_forward` kernel parameter is automatically enabled by the `ovnkube-master` container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a cluster fails if `ip_forward` is disabled.
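You can confirm the current setting on a running host, for example:

```console
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
```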
1.4. Network performance optimizations
By default, three performance optimizations are applied to OVS services to minimize resource consumption:

- CPU affinity to `ovs-vswitchd.service` and `ovsdb-server.service` (see the sketch after this list)
- `no-mlockall` to `openvswitch.service`
- Limit handler and `revalidator` threads to `ovs-vswitchd.service`
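The CPU affinity optimization is applied through systemd drop-in files. A minimal sketch of such a drop-in, assuming the services are pinned to CPU 0; the shipped values may differ:

```ini
# /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf
[Service]
CPUAffinity=0
```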
1.5. MicroShift networking components and services
This brief overview describes networking components and their operation in MicroShift. The `microshift-networking` RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the `microshift-ovs-init` systemd service.
- NetworkManager: NetworkManager is required to set up the initial gateway bridge on the MicroShift node. The `NetworkManager` and `NetworkManager-ovs` RPM packages are installed as dependencies of the `microshift-networking` RPM package, which contains the necessary configuration files. NetworkManager in MicroShift uses the `keyfile` plugin and is restarted after installation of the `microshift-networking` RPM package.
- microshift-ovs-init: The `microshift-ovs-init.service` is installed by the `microshift-networking` RPM package as a dependent systemd service to `microshift.service`. It is responsible for setting up the OVS gateway bridge.
- OVN containers: Two OVN-Kubernetes daemon sets are rendered and applied by MicroShift.
  - ovnkube-master: Includes the `northd`, `nbdb`, `sbdb`, and `ovnkube-master` containers.
  - ovnkube-node: Includes the OVN-Controller container.

  After MicroShift starts, the OVN-Kubernetes daemon sets are deployed in the `openshift-ovn-kubernetes` namespace.
- Packaging: OVN-Kubernetes manifests and startup logic are built into MicroShift. The systemd services and configurations included in the `microshift-networking` RPM are:
  - `/etc/NetworkManager/conf.d/microshift-nm.conf` for `NetworkManager.service`
  - `/etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf` for `ovs-vswitchd.service`
  - `/etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf` for `ovsdb-server.service`
  - `/usr/bin/configure-ovs-microshift.sh` for `microshift-ovs-init.service`
  - `/usr/bin/configure-ovs.sh` for `microshift-ovs-init.service`
  - `/etc/crio/crio.conf.d/microshift-ovn.conf` for the CRI-O service
1.6. Bridge mappings
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the `br-int` bridge. A patch port between `br-int` and `br-ex` then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the `br-int` bridge through a virtual ethernet (veth) pair: one end of the pair is attached to the pod namespace, and the other end is attached to the `br-int` bridge.
1.7. Network topology
OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to the default route MTU if it is not configured.
To configure the MTU, you must set a value equal to or less than the MTU of the physical interface on the host. An MTU value that is less than the physical interface MTU makes room for the required information that is added to the tunnel header before transmission.
OVS runs as a systemd service on the MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed.
Figure: Red Hat build of MicroShift network topology
1.7.1. Description of the OVN logical components of the virtualized network
- OVN node switch: A virtual switch named `<node-name>`. The OVN node switch is named according to the hostname of the node. In this example, the `node-name` is `microshift-dev`.
- OVN cluster router: A virtual router named `ovn_cluster_router`, also known as the distributed router. In this example, the cluster network is `10.42.0.0/16`.
- OVN join switch: A virtual switch named `join`.
- OVN gateway router: A virtual router named `GR_<node-name>`, also known as the external gateway router.
- OVN external switch: A virtual switch named `ext_<node-name>`.
1.7.2. Description of the connections in the network topology figure
- The north-south traffic between the network service and the OVN external switch `ext_microshift-dev` is provided through the host kernel by the gateway bridge `br-ex`.
- The OVN gateway router `GR_microshift-dev` is connected to the external network switch `ext_microshift-dev` through the logical router port 4. Port 4 is attached with the node IP address 192.168.122.14.
- The join switch `join` connects the OVN gateway router `GR_microshift-dev` to the OVN cluster router `ovn_cluster_router`. The IP address range is 100.64.0.0/16.
  - The OVN gateway router `GR_microshift-dev` connects to the OVN join switch `join` through the logical router port 3. Port 3 attaches with the internal IP address 100.64.0.2.
  - The OVN cluster router `ovn_cluster_router` connects to the join switch `join` through the logical router port 2. Port 2 attaches with the internal IP address 100.64.0.1.
- The OVN cluster router `ovn_cluster_router` connects to the node switch `microshift-dev` through the logical router port 1. Port 1 is attached with the OVN cluster network IP address 10.42.0.1.
- The east-west traffic between the pods and the network service is provided by the OVN cluster router `ovn_cluster_router` and the node switch `microshift-dev`. The IP address range is 10.42.0.0/24.
- The east-west traffic between pods is provided by the node switch `microshift-dev` without network address translation (NAT).
- The north-south traffic between the pods and the external network is provided by the OVN cluster router `ovn_cluster_router` and the host network. This router is connected through the `ovn-kubernetes` management port `ovn-k8s-mp0`, with the IP address 10.42.0.2.
- All the pods are connected to the OVN node switch through their interfaces. In this example, Pod 1 and Pod 2 are connected to the node switch through `Interface 1` and `Interface 2`.
Chapter 2. Understanding networking settings
Learn how to apply networking customization and default settings to MicroShift deployments. Each node is contained to a single machine and a single MicroShift instance, so each deployment requires individual configuration, pods, and settings.
Cluster administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:

- A service such as NodePort
- API resources, such as `Ingress` and `Route`
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
2.1. Creating an OVN-Kubernetes configuration file
MicroShift uses built-in default OVN-Kubernetes values if an OVN-Kubernetes configuration file is not created. You can write an OVN-Kubernetes configuration file to `/etc/microshift/ovn.yaml`. An example file is provided for your configuration.
Procedure

1. To create your `ovn.yaml` file, run the following command:

   ```console
   $ sudo cp /etc/microshift/ovn.yaml.default /etc/microshift/ovn.yaml
   ```

2. To list the contents of the configuration file you created, run the following command:

   ```console
   $ cat /etc/microshift/ovn.yaml
   ```

   Example YAML file with default maximum transmission unit (MTU) value

   ```yaml
   mtu: 1400
   ```

3. To customize your configuration, you can change the MTU value. The table that follows provides details:

   Table 2.1. Supported optional OVN-Kubernetes configurations for MicroShift

   | Field | Type | Default | Description | Example |
   |---|---|---|---|---|
   | mtu | uint32 | auto | MTU value used for the pods | 1300 |

   Important: If you change the `mtu` configuration value in the `ovn.yaml` file, you must restart the host that Red Hat build of MicroShift is running on to apply the updated setting.

   Example custom `ovn.yaml` configuration file

   ```yaml
   mtu: 1300
   ```
2.2. Restarting the ovnkube-master pod
The following procedure restarts the ovnkube-master pod.
Prerequisites

- The OpenShift CLI (`oc`) is installed.
- Access to the cluster as a user with the `cluster-admin` role.
- A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
- The `KUBECONFIG` environment variable is set.
Procedure

1. Access the remote cluster by running the following command:

   ```console
   $ export KUBECONFIG=$PWD/kubeconfig
   ```

2. Find the name of the `ovnkube-master` pod that you want to restart by running the following command:

   ```console
   $ pod=$(oc get pods -n openshift-ovn-kubernetes | awk -F " " '/ovnkube-master/{print $1}')
   ```

3. Delete the `ovnkube-master` pod by running the following command:

   ```console
   $ oc -n openshift-ovn-kubernetes delete pod $pod
   ```

4. Confirm that a new `ovnkube-master` pod is running by using the following command:

   ```console
   $ oc get pods -n openshift-ovn-kubernetes
   ```

   The listing of the running pods shows a new `ovnkube-master` pod name and age.
2.3. Deploying MicroShift behind an HTTP or HTTPS proxy
Deploy a MicroShift cluster behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.
When deploying MicroShift behind a proxy, you must configure the host operating system so that all components that initiate HTTP or HTTPS requests use the proxy service.
All the user-specific workloads or pods with egress traffic, such as accessing cloud services, must be configured to use the proxy. There is no built-in transparent proxying of egress traffic in MicroShift.
2.4. Using the RPM-OStree HTTP or HTTPS proxy
To use the HTTP or HTTPS proxy in RPM-OStree, you must add a `Service` section to the configuration file and set the `http_proxy` environment variable for the `rpm-ostreed` service.
Procedure

1. Add this setting to the `/etc/systemd/system/rpm-ostreed.service.d/00-proxy.conf` file:

   ```ini
   [Service]
   Environment="http_proxy=http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
   ```

   Next, reload the configuration settings and restart the service to apply your changes.

2. Reload the configuration settings by running the following command:

   ```console
   $ sudo systemctl daemon-reload
   ```

3. Restart the `rpm-ostreed` service by running the following command:

   ```console
   $ sudo systemctl restart rpm-ostreed.service
   ```
2.5. Using a proxy in the CRI-O container runtime
To use an HTTP or HTTPS proxy in CRI-O, you must add a `Service` section to the configuration file and set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. You can also set the `NO_PROXY` variable to exclude a list of hosts from being proxied.
Procedure

1. Create the directory for the configuration file if it does not exist:

   ```console
   $ sudo mkdir /etc/systemd/system/crio.service.d/
   ```

2. Add the following settings to the `/etc/systemd/system/crio.service.d/00-proxy.conf` file:

   ```ini
   [Service]
   Environment=NO_PROXY="localhost,127.0.0.1"
   Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
   Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
   ```

   Important: You must define the `Service` section of the configuration file for the environment variables or the proxy settings fail to apply.

3. Reload the configuration settings:

   ```console
   $ sudo systemctl daemon-reload
   ```

4. Restart the CRI-O service:

   ```console
   $ sudo systemctl restart crio
   ```

5. Restart the MicroShift service to apply the settings:

   ```console
   $ sudo systemctl restart microshift
   ```
Verification

1. Verify that pods are started by running the following command and examining the output:

   ```console
   $ oc get all -A
   ```

2. Verify that MicroShift is able to pull container images by running the following command and examining the output:

   ```console
   $ sudo crictl images
   ```
2.6. Getting a snapshot of OVS interfaces from a running cluster
A snapshot represents the state and data of OVS interfaces at a specific point in time.
Procedure

To see a snapshot of OVS interfaces from a running MicroShift cluster, use the following command:

```console
$ sudo ovs-vsctl show
```

Example OVS interfaces in a running cluster
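A representative sketch of the output; the UUID, the `localhost.localdomain` host name, the exact bridge layout, and the OVS version are illustrative. The numbered comments correspond to the callouts below:

```text
5c0de550-8cab-407c-a611-4483f4c8f3dd
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-br-ex_localhost.localdomain-to-br-int    # 1
            Interface patch-br-ex_localhost.localdomain-to-br-int
                type: patch
                options: {peer=patch-br-int-to-br-ex_localhost.localdomain}
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port patch-br-int-to-br-ex_localhost.localdomain    # 2
            Interface patch-br-int-to-br-ex_localhost.localdomain
                type: patch
                options: {peer=patch-br-ex_localhost.localdomain-to-br-int}
        Port eebee1ce5568761                                # 3
            Interface eebee1ce5568761
        Port b47b1995ada84f4                                # 4
            Interface b47b1995ada84f4
        Port "3031f43d67c167f"                              # 5
            Interface "3031f43d67c167f"
        Port ovn-k8s-mp0                                    # 6
            Interface ovn-k8s-mp0
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.17.3"
```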
1. The `patch-br-ex_localhost.localdomain-to-br-int` and `patch-br-int-to-br-ex_localhost.localdomain` are OVS patch ports that connect `br-ex` and `br-int`.
2. The `patch-br-ex_localhost.localdomain-to-br-int` and `patch-br-int-to-br-ex_localhost.localdomain` are OVS patch ports that connect `br-ex` and `br-int`.
3. The pod interface `eebee1ce5568761` is named with the first 15 characters of the pod sandbox ID and is plugged into the `br-int` bridge.
4. The pod interface `b47b1995ada84f4` is named with the first 15 characters of the pod sandbox ID and is plugged into the `br-int` bridge.
5. The pod interface `3031f43d67c167f` is named with the first 15 characters of the pod sandbox ID and is plugged into the `br-int` bridge.
6. The OVS internal port for hairpin traffic, `ovn-k8s-mp0`, is created by the `ovnkube-master` container.
2.7. Deploying a load balancer for a workload
MicroShift has a built-in implementation of network load balancers. The following example procedure uses the node IP address as the external IP address for the LoadBalancer service configuration file. You can use this example as guidance for how to deploy load balancers for your workloads.
Prerequisites

- The OpenShift CLI (`oc`) is installed.
- You have access to the cluster as a user with the cluster administration role.
- You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
- The `KUBECONFIG` environment variable is set.
Procedure

1. Verify that your pods are running by running the following command:

   ```console
   $ oc get pods -A
   ```

2. Create the example namespace by running the following commands:

   ```console
   $ NAMESPACE=nginx-lb-test
   $ oc create ns $NAMESPACE
   ```

3. Deploy three replicas of the test `nginx` application in your namespace, for example with the manifest that follows this step.
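A minimal sketch of such a deployment; the container image, the listen port, and the `X-Server-IP` response header configuration are illustrative assumptions chosen to match the verification step later in this procedure:

```console
$ oc apply -n $NAMESPACE -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
        listen 8080;
        # Return the serving pod IP in a response header for the verification step
        add_header X-Server-IP $server_addr always;
        location / {
            return 200 "nginx\n";
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
EOF
```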
4. Verify that the three sample replicas started successfully by running the following command:

   ```console
   $ oc get pods -n $NAMESPACE
   ```
LoadBalancerservice for thenginxtest application with the following sample commands:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteYou must ensure that the
portparameter is a host port that is not occupied by otherLoadBalancerservices or Red Hat build of MicroShift components.Verify that the service file exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:
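A sketch of such a service; the port numbers are illustrative and chosen to match the example output and verification commands in this procedure:

```console
$ oc create -n $NAMESPACE -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 81        # host port for the load balancer
    targetPort: 8080
EOF
```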
6. Verify that the service exists, that the external IP address is properly assigned, and that the external IP is identical to the node IP by running the following command:

   ```console
   $ oc get svc -n $NAMESPACE
   ```
   Example output

   ```text
   NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
   nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m
   ```
Verification

The following command forms five connections to the example `nginx` application by using the external IP address of the `LoadBalancer` service configuration. The result of the command is a list of those server IP addresses. Verify that the load balancer sends requests to all the running applications with the following command:

```console
$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
```

The output of the previous command contains different IP addresses if the load balancer is successfully distributing the traffic to the applications, for example:

Example output

```text
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
```
2.8. Blocking external access to the NodePort service on a specific host interface
OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a Red Hat build of MicroShift node. The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.
Prerequisites

- You must have an account with root privileges.

Procedure

1. Change the `NODEPORT` variable to the host port number assigned to your Kubernetes NodePort service by running the following command:

   ```console
   # export NODEPORT=30700
   ```

2. Change the `INTERFACE_IP` value to the IP address from the host interface that you want to block. For example:

   ```console
   # export INTERFACE_IP=192.168.150.33
   ```

3. Insert a new rule in the `nat` table PREROUTING chain to drop all packets that match the destination port and IP address. For example:

   ```console
   $ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
   ```

4. List the new rule by running the following command:
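For example, assuming the rule was inserted into the `nat` table PREROUTING chain as in the previous step:

```console
$ sudo nft -a list chain ip nat PREROUTING
```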
   Note: Note the `handle` number of the newly added rule. You need the `handle` number to remove the rule in the following step.

5. Remove the custom rule with the following sample command:

   ```console
   $ sudo nft -a delete rule ip nat PREROUTING handle 134
   ```
2.9. The multicast DNS protocol
You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port.
MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift. The embedded DNS server allows .local domains exposed by MicroShift to be discovered by other elements on the LAN.
2.10. Auditing exposed network ports
On MicroShift, the host port can be opened by a workload in the following cases. You can check logs to view the network services.
2.10.1. hostNetwork
When a pod is configured with the `hostNetwork:true` setting, the pod runs in the host network namespace. This configuration can independently open host ports. MicroShift component logs cannot be used to track this case; instead, the ports are subject to firewalld rules. If the port opens in firewalld, you can view the port opening in the firewalld debug log.
Prerequisites

- You have root user access to your build host.

Procedure

1. Optional: You can check that the `hostNetwork:true` parameter is set in your ovnkube-node pod by using the following example command:

   ```console
   $ sudo oc get pod -n openshift-ovn-kubernetes <ovnkube-node-pod-name> -o json | jq -r '.spec.hostNetwork'
   true
   ```

2. Enable debug in the firewalld log by adding the following setting in the `/etc/sysconfig/firewalld` file:

   ```console
   $ sudo vi /etc/sysconfig/firewalld
   FIREWALLD_ARGS=--debug=10
   ```

3. Restart the firewalld service:

   ```console
   $ sudo systemctl restart firewalld.service
   ```

4. To verify that the debug option was added properly, run the following command:

   ```console
   $ sudo systemd-cgls -u firewalld.service
   ```

   The firewalld debug log is stored in the `/var/log/firewalld` path.

   Example logs for when the port open rule is added:

   ```text
   2023-06-28 10:46:37 DEBUG1: config.getZoneByName('public')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.addPort('8080', 'tcp')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.getSettings()
   2023-06-28 10:46:37 DEBUG1: config.zone.7.update('...')
   2023-06-28 10:46:37 DEBUG1: config.zone.7.Updated('public')
   ```

   Example logs for when the port open rule is removed:
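A representative sketch, assuming the same zone and port as the add example; the exact lines depend on the firewalld version:

```text
2023-06-28 10:47:58 DEBUG1: config.getZoneByName('public')
2023-06-28 10:47:58 DEBUG1: config.zone.7.removePort('8080', 'tcp')
2023-06-28 10:47:58 DEBUG1: config.zone.7.getSettings()
2023-06-28 10:47:58 DEBUG1: config.zone.7.update('...')
2023-06-28 10:47:58 DEBUG1: config.zone.7.Updated('public')
```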
2.10.2. hostPort
You can access the `hostPort` setting logs in MicroShift. The following logs are examples for the `hostPort` setting:
Procedure

You can access the logs by running the following command:

```console
$ journalctl -u crio | grep "local port"
```

Example CRI-O log when the host port is opened:

```text
Jun 25 16:27:37 rhel92 crio[77216]: time="2023-06-25 16:27:37.033003098+08:00" level=info msg="Opened local port tcp:443"
```

Example CRI-O log when the host port is closed:

```text
Jun 25 16:24:11 rhel92 crio[77216]: time="2023-06-25 16:24:11.342088450+08:00" level=info msg="Closing host port tcp:443"
```
2.10.3. NodePort and LoadBalancer service
OVN-Kubernetes opens host ports for NodePort and LoadBalancer service types. These services add iptables rules that take the ingress traffic from the host port and forward it to the cluster IP. Logs for the NodePort and LoadBalancer services are presented in the following examples:
Procedure

1. To find the name of your `ovnkube-master` pod, run the following command:

   ```console
   $ oc get pods -n openshift-ovn-kubernetes | awk '/ovnkube-master/{print $1}'
   ```

   Example `ovnkube-master` pod name

   ```text
   ovnkube-master-n2shv
   ```

2. You can access the `NodePort` and `LoadBalancer` services logs by using your `ovnkube-master` pod and running the following example command:

   ```console
   $ oc logs -n openshift-ovn-kubernetes <ovnkube-master-pod-name> ovnkube-master | grep -E "OVN-KUBE-NODEPORT|OVN-KUBE-EXTERNALIP"
   ```

   NodePort service:

   Example log in the `ovnkube-master` container of the `ovnkube-master` pod when a host port is opened:

   ```text
   I0625 09:07:00.992980 2118395 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0
   ```

   Example log in the `ovnkube-master` container of the `ovnkube-master` pod when a host port is closed:

   ```text
   Deleting rule in table: nat, chain: OVN-KUBE-NODEPORT with args: "-p TCP -m addrtype --dst-type LOCAL --dport 32718 -j DNAT --to-destination 10.96.178.142:8081" for protocol: 0
   ```

   LoadBalancer service:

   Example log in the `ovnkube-master` container of the `ovnkube-master` pod when a host port is opened:

   ```text
   I0625 09:34:10.406067 128902 iptables.go:27] Adding rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
   ```

   Example log in the `ovnkube-master` container of the `ovnkube-master` pod when a host port is closed:

   ```text
   I0625 09:37:00.976953 128902 iptables.go:63] Deleting rule in table: nat, chain: OVN-KUBE-EXTERNALIP with args: "-p TCP -d 172.16.47.129 --dport 8081 -j DNAT --to-destination 10.43.114.94:8081" for protocol: 0
   ```
Chapter 3. Network policies
3.1. About network policies
Learn how network policies work for MicroShift to restrict or allow network traffic to pods in your cluster.
3.1.1. How network policy works in MicroShift
In a cluster using the default OVN-Kubernetes Container Network Interface (CNI) plugin for MicroShift, network isolation is controlled by both firewalld, which is configured on the host, and by NetworkPolicy objects created within MicroShift. Simultaneous use of firewalld and NetworkPolicy is supported.
- Network policies work only within the boundaries of OVN-Kubernetes-controlled traffic, so they can apply to every situation except for `hostPort`/`hostNetwork` enabled pods.
- Firewalld settings also do not apply to `hostPort`/`hostNetwork` enabled pods.

Note: Firewalld rules run before any NetworkPolicy is enforced.

Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules.

Network policies cannot block traffic from localhost.
By default, all pods in a MicroShift node are accessible from other pods and network endpoints. To isolate one or more pods in a cluster, you can create NetworkPolicy objects to indicate allowed incoming connections. You can create and delete NetworkPolicy objects.
If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod accepts only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.
A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected.
The following example NetworkPolicy objects demonstrate supporting different scenarios:
Deny all traffic:

To make a project deny by default, add a `NetworkPolicy` object that matches all pods but accepts no traffic:
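A minimal example, with an illustrative policy name:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []
```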
Allow connections from the default router, which is the ingress in MicroShift:

To allow connections from the MicroShift default router, add the following `NetworkPolicy` object:
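A sketch that assumes the default router namespace carries the `network.openshift.io/policy-group: ingress` label, as in OpenShift-family clusters:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  policyTypes:
  - Ingress
```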
Only accept connections from pods within the same namespace:

To make pods accept connections from other pods in the same namespace, but reject all other connections from pods in other namespaces, add the following `NetworkPolicy` object:
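A minimal example:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```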
Only allow HTTP and HTTPS traffic based on pod labels:

To enable only HTTP and HTTPS access to the pods with a specific label (`role=frontend` in the following example), add a `NetworkPolicy` object similar to the following:
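A minimal example:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
```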
Accept connections by using both namespace and pod selectors:

To match network traffic by combining namespace and pod selectors, you can use a `NetworkPolicy` object similar to the following:
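A sketch; the `project: project-y` and `name: test-pods` labels are illustrative. Because the `namespaceSelector` and `podSelector` appear in the same `from` entry, both must match:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-pod-and-namespace-both
spec:
  podSelector:
    matchLabels:
      name: test-pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: project-y
      podSelector:
        matchLabels:
          name: test-pods
```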
`NetworkPolicy` objects are additive, which means you can combine multiple `NetworkPolicy` objects together to satisfy complex network requirements.

For example, for the `NetworkPolicy` objects defined in previous examples, you can define both `allow-same-namespace` and `allow-http-and-https` policies. That configuration allows the pods with the label `role=frontend` to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
3.1.2. Optimizations for network policy with OVN-Kubernetes network plugin
When designing your network policy, refer to the following guidelines:
- For network policies with the same `spec.podSelector` spec, it is more efficient to use one network policy with multiple `ingress` or `egress` rules than multiple network policies with subsets of `ingress` or `egress` rules.
- Every `ingress` or `egress` rule based on the `podSelector` or `namespaceSelector` spec generates a number of OVS flows proportional to `number of pods selected by network policy + number of pods selected by ingress or egress rule`. Therefore, it is preferable to use the `podSelector` or `namespaceSelector` spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod.

For example, the following policy contains two rules:
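A sketch with illustrative `role` labels:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  - from:
    - podSelector:
        matchLabels:
          role: backend
```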
The following policy expresses those same two rules as one:
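A sketch, combining both selectors into a single rule:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    - podSelector:
        matchLabels:
          role: backend
```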
The same guideline applies to the `spec.podSelector` spec. If you have the same `ingress` or `egress` rules for different network policies, it might be more efficient to create one network policy with a common `spec.podSelector` spec. For example, the following sketch shows two policies that share the same rule but apply it to different pod selectors, followed by a single network policy that expresses those same two rules as one.

You can apply this optimization when only multiple selectors are expressed as one. In cases where selectors are based on different labels, it might not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically.
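A sketch with illustrative `role` labels. First, the two separate policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
spec:
  podSelector:
    matchLabels:
      role: client
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```

And the same two rules expressed as one policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy3
spec:
  podSelector:
    matchExpressions:
    - {key: role, operator: In, values: [db, client]}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```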
3.1.2.1. NetworkPolicy CR and external IPs in OVN-Kubernetes
In OVN-Kubernetes, the NetworkPolicy custom resource (CR) enforces strict isolation rules. If a service is exposed using an external IP, a network policy can block access from other namespaces unless explicitly configured to allow traffic.
To allow access to external IPs across namespaces, create a NetworkPolicy CR that explicitly permits ingress from the required namespaces and ensures traffic is allowed to the designated service ports. Without allowing traffic to the required ports, access might still be restricted.
Example `NetworkPolicy` object
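A sketch of such a policy; the namespace-wide ingress selector and the TCP port 8080 are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <policy_name>
  namespace: <my_namespace>
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 8080
```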
where:

- `<policy_name>`: Specifies your name for the policy.
- `<my_namespace>`: Specifies the name of the namespace where the policy is deployed.
For more details, see "About network policy".
3.2. Creating network policies
You can create a network policy for a namespace.
3.2.1. Example NetworkPolicy object
The following is an annotated example `NetworkPolicy` object:
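A sketch of such an object; the name, labels, and port are illustrative, and the numbered comments correspond to the callouts that follow:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-27107 # 1
spec:
  podSelector: # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: # 3
        matchLabels:
          app: app
    ports: # 4
    - protocol: TCP
      port: 27017
```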
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4. A list of one or more destination ports on which to accept traffic.
3.2.2. Creating a network policy using the CLI
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace that the network policy applies to.
Procedure

1. Create a policy rule:

   a. Create a `<policy_name>.yaml` file:

      ```console
      $ touch <policy_name>.yaml
      ```

      where:

      - `<policy_name>`: Specifies the network policy file name.

   b. Define a network policy in the file that you just created, such as in the following examples:

      Deny ingress from all pods in all namespaces:

      This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other network policies.
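A minimal example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
```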
      Allow ingress from all pods in the same namespace:
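A minimal example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
```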
      Allow ingress traffic to one pod from a particular namespace:

      This policy allows traffic to pods labeled `pod-a` from pods running in `namespace-y`.
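A sketch, assuming `pod-a` is selected by a `pod: pod-a` label and that `namespace-y` carries the standard `kubernetes.io/metadata.name` label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic-pod
spec:
  podSelector:
    matchLabels:
      pod: pod-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: namespace-y
```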
2. To create the network policy object, enter the following command:

   ```console
   $ oc apply -f <policy_name>.yaml -n <namespace>
   ```

   where:

   - `<policy_name>`: Specifies the network policy file name.
   - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

   Example output

   ```text
   networkpolicy.networking.k8s.io/deny-by-default created
   ```
3.2.3. Creating a default deny all network policy
This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace that the network policy applies to.
Procedure

1. Create the following YAML that defines a `deny-by-default` policy to deny ingress from all pods in all namespaces. Save the YAML in the `deny-by-default.yaml` file:
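A minimal example; the `default` namespace is illustrative:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: default
spec:
  podSelector: {}
  ingress: []
```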
oc apply -f deny-by-default.yaml
$ oc apply -f deny-by-default.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
networkpolicy.networking.k8s.io/deny-by-default created
networkpolicy.networking.k8s.io/deny-by-default createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow
3.2.4. Creating a network policy to allow traffic from external clients
With the deny-by-default policy in place, you can proceed to configure a policy that allows traffic from external clients to a pod with the label `app=web`.

Note: Firewalld rules run before any NetworkPolicy is enforced.

Follow this procedure to configure a policy that allows external traffic from the public Internet, directly or by using a load balancer, to access the pod. Traffic is only allowed to a pod with the label `app=web`.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace that the network policy applies to.
Procedure

1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the `web-allow-external.yaml` file:
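A minimal example; the empty ingress rule (`- {}`) allows traffic from any source:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
  namespace: default
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}
```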
2. Apply the policy by entering the following command:

   ```console
   $ oc apply -f web-allow-external.yaml
   ```

   Example output

   ```text
   networkpolicy.networking.k8s.io/web-allow-external created
   ```
3.2.5. Creating a network policy allowing traffic to an application from all namespaces
Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace that the network policy applies to.
Procedure

1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the `web-allow-all-namespaces.yaml` file:
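A minimal example; the empty `namespaceSelector: {}` matches all namespaces:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all-namespaces
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
```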
   Note: By default, if you omit specifying a `namespaceSelector`, it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to.

2. Apply the policy by entering the following command:

   ```console
   $ oc apply -f web-allow-all-namespaces.yaml
   ```

   Example output

   ```text
   networkpolicy.networking.k8s.io/web-allow-all-namespaces created
   ```
Verification

1. Start a web service in the `default` namespace by entering the following command:

   ```console
   $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
   ```

2. Run the following command to deploy an `alpine` image in the `secondary` namespace and to start a shell:

   ```console
   $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
   ```

3. Run the following command in the shell and observe that the request is allowed:

   ```console
   # wget -qO- --timeout=2 http://web.default
   ```

   Expected output
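The request succeeds; the expected output is the nginx welcome page, for example (truncated):

```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<body>
<h1>Welcome to nginx!</h1>
...
</html>
```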
3.2.6. Creating a network policy allowing traffic to an application from a namespace
Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to:
- Restrict traffic to a production database only to namespaces where production workloads are deployed.
- Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace that the network policy applies to.
Procedure

1. Create a policy that allows traffic from all pods in a particular namespace with the label `purpose=production`. Save the YAML in the `web-allow-prod.yaml` file:
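A minimal example:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
```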
2. Apply the policy by entering the following command:

   ```console
   $ oc apply -f web-allow-prod.yaml
   ```

   Example output

   ```text
   networkpolicy.networking.k8s.io/web-allow-prod created
   ```
Verification

1. Start a web service in the `default` namespace by entering the following command:

   ```console
   $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
   ```

2. Run the following command to create the `prod` namespace:

   ```console
   $ oc create namespace prod
   ```

3. Run the following command to label the `prod` namespace:

   ```console
   $ oc label namespace/prod purpose=production
   ```

4. Run the following command to create the `dev` namespace:

   ```console
   $ oc create namespace dev
   ```

5. Run the following command to label the `dev` namespace:

   ```console
   $ oc label namespace/dev purpose=testing
   ```

6. Run the following command to deploy an `alpine` image in the `dev` namespace and to start a shell:

   ```console
   $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
   ```

7. Run the following command in the shell and observe that the request is blocked:

   ```console
   # wget -qO- --timeout=2 http://web.default
   ```

   Expected output

   ```text
   wget: download timed out
   ```

8. Run the following command to deploy an `alpine` image in the `prod` namespace and start a shell:

   ```console
   $ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
   ```

9. Run the following command in the shell and observe that the request is allowed:

   ```console
   # wget -qO- --timeout=2 http://web.default
   ```

   Expected output
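The request succeeds; the expected output is the nginx welcome page, for example (truncated):

```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<body>
<h1>Welcome to nginx!</h1>
...
</html>
```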
3.3. Editing a network policy
You can edit an existing network policy for a namespace. Typical edits might include changes to the pods to which the policy applies, allowed ingress traffic, and the destination ports on which to accept traffic. The `apiVersion`, `kind`, and `name` fields must not be changed when editing `NetworkPolicy` objects, as these define the resource itself.
3.3.1. Editing a network policy
You can edit a network policy in a namespace.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace where the network policy exists.
Procedure

1. Optional: To list the network policy objects in a namespace, enter the following command:

   ```console
   $ oc get networkpolicy -n <namespace>
   ```

   where:

   - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

2. Edit the network policy object:

   - If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command:

     ```console
     $ oc apply -n <namespace> -f <policy_file>.yaml
     ```

     where:

     - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
     - `<policy_file>`: Specifies the name of the file containing the network policy.

   - If you need to update the network policy object directly, enter the following command:

     ```console
     $ oc edit networkpolicy <policy_name> -n <namespace>
     ```

     where:

     - `<policy_name>`: Specifies the name of the network policy.
     - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

3. Confirm that the network policy object is updated:

   ```console
   $ oc describe networkpolicy <policy_name> -n <namespace>
   ```

   where:

   - `<policy_name>`: Specifies the name of the network policy.
   - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
3.3.2. Example NetworkPolicy object
The following is an annotated example `NetworkPolicy` object:
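A sketch of such an object; the name, labels, and port are illustrative, and the numbered comments correspond to the callouts that follow:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-27107 # 1
spec:
  podSelector: # 2
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: # 3
        matchLabels:
          app: app
    ports: # 4
    - protocol: TCP
      port: 27017
```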
1. The name of the NetworkPolicy object.
2. A selector that describes the pods to which the policy applies.
3. A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4. A list of one or more destination ports on which to accept traffic.
3.4. Deleting a network policy
You can delete a network policy from a namespace.
3.4.1. Deleting a network policy using the CLI
You can delete a network policy in a namespace.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace where the network policy exists.
Procedure

To delete a network policy object, enter the following command:

```console
$ oc delete networkpolicy <policy_name> -n <namespace>
```

where:

- `<policy_name>`: Specifies the name of the network policy.
- `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

Example output

```text
networkpolicy.networking.k8s.io/default-deny deleted
```
3.5. Viewing a network policy
Use the following procedure to view a network policy for a namespace.
3.5.1. Viewing network policies using the CLI
You can examine the network policies in a namespace.
Prerequisites

- You installed the OpenShift CLI (`oc`).
- You are working in the namespace where the network policy exists.
Procedure

1. List network policies in a namespace:

   To view network policy objects defined in a namespace, enter the following command:

   ```console
   $ oc get networkpolicy
   ```

2. Optional: To examine a specific network policy, enter the following command:

   ```console
   $ oc describe networkpolicy <policy_name> -n <namespace>
   ```

   where:

   - `<policy_name>`: Specifies the name of the network policy to inspect.
   - `<namespace>`: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

   For example:

   ```console
   $ oc describe networkpolicy allow-same-namespace
   ```

   Output for `oc describe` command
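A representative sketch of the output; the namespace and timestamp are illustrative:

```text
Name:         allow-same-namespace
Namespace:    ns1
Created on:   2024-03-18 10:00:00 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      PodSelector: <none>
  Not affecting egress traffic
  Policy Types: Ingress
```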
Chapter 4. Using a firewall
Firewalls are not required in MicroShift, but using a firewall can prevent undesired access to the MicroShift API.
4.1. About network traffic through the firewall
Firewalld is a networking service that runs in the background and responds to connection requests, creating a dynamic, customizable, host-based firewall. If you are using Red Hat Enterprise Linux for Edge (RHEL for Edge) with MicroShift, firewalld should already be installed, and you just need to configure it. Details are provided in the procedures that follow. Overall, you must explicitly allow the following OVN-Kubernetes traffic when the firewalld service is running:
- CNI pod to CNI pod
- CNI pod to Host-Network pod
- Host-Network pod to Host-Network pod

where:

- CNI pod: The Kubernetes pod that uses the CNI network
- Host-Network pod: The Kubernetes pod that uses the host network

You can configure the `firewalld` service by using the following procedures. In most cases, firewalld is part of RHEL for Edge installations. If you do not have firewalld, you can install it with the simple procedure in this section.
MicroShift pods must have access to the internal CoreDNS component and API servers.
4.2. Installing the firewalld service
If you are using RHEL for Edge, firewalld should be installed. To use the service, you can simply configure it. The following procedure can be used if you do not have firewalld, but want to use it.
Install and run the firewalld service for MicroShift by using the following steps.
Procedure
Optional: Check for firewalld on your system by running the following command:
$ rpm -q firewalld
If the firewalld service is not installed, run the following command:
$ sudo dnf install -y firewalld
To start the firewall, run the following command:
$ sudo systemctl enable firewalld --now
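To confirm that the firewalld service is active before you configure it, you can query its state; firewall-cmd prints running when the daemon is up:
$ sudo firewall-cmd --state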
4.3. Required firewall settings
An IP address range for the cluster network must be enabled during firewall configuration. You can use the default values or customize the IP address range. If you choose to customize the cluster network IP address range from the default 10.42.0.0/16 setting, you must also use the same custom range in the firewall configuration.
| IP Range | Firewall rule required | Description |
|---|---|---|
| 10.42.0.0/16 | No | Host network pod access to other pods |
| 169.254.169.1 | Yes | Host network pod access to Red Hat build of MicroShift API server |
The following are examples of commands for settings that are mandatory for firewall configuration:
Example commands
Configure host network pod access to other pods:
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
Configure host network pod access to services backed by Host endpoints, such as the Red Hat build of MicroShift API:
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
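Because both commands use the --permanent flag, the new rules take effect only after a reload. You can reload and then confirm that both sources appear in the trusted zone, for example:
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --zone=trusted --list-sources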
4.4. Using optional port settings
The MicroShift firewall service allows optional port settings.
Procedure
To add customized ports to your firewall configuration, use the following command syntax:
$ sudo firewall-cmd --permanent --zone=public --add-port=<port number>/<port protocol>
Table 4.2. Optional ports
| Port(s) | Protocol(s) | Description |
|---|---|---|
| 80 | TCP | HTTP port used to serve applications through the OpenShift Container Platform router. |
| 443 | TCP | HTTPS port used to serve applications through the OpenShift Container Platform router. |
| 5353 | UDP | mDNS service to respond for OpenShift Container Platform route mDNS hosts. |
| 30000-32767 | TCP | Port range reserved for NodePort services; can be used to expose applications on the LAN. |
| 30000-32767 | UDP | Port range reserved for NodePort services; can be used to expose applications on the LAN. |
| 6443 | TCP | HTTPS API port for the Red Hat build of MicroShift API. |
The following are examples of commands that you can use when you require external access through the firewall to services running on MicroShift, such as port 6443 for the API server, or ports 80 and 443 for applications exposed through the router.
Example command
Configuring a port for the MicroShift API server:
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
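Similarly, to serve applications through the router on the HTTP and HTTPS ports listed in Table 4.2, you can open ports 80 and 443 with the same syntax:
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp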
To close unnecessary ports in your MicroShift instance, follow the procedure in "Closing unused or unnecessary ports to enhance network security".
4.5. Adding services to open ports
On a MicroShift instance, you can open the default ports used by predefined services by using the firewall-cmd command.
Procedure
Optional: You can view all predefined services in firewalld by running the following command:
$ sudo firewall-cmd --get-services
To open a service on its default port, run the following example command:
$ sudo firewall-cmd --add-service=mdns
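Running firewall-cmd without the --permanent flag changes only the runtime configuration, which is lost when the firewall reloads. To keep the service open across reloads, repeat the command with --permanent and reload, for example:
$ sudo firewall-cmd --permanent --add-service=mdns
$ sudo firewall-cmd --reload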
4.6. Allowing network traffic through the firewall
You can allow network traffic through the firewall by configuring the IP address range and adding the DNS server IP address to allow internal traffic from pods through the network gateway. This procedure uses firewall-offline-cmd, which configures the firewall while the firewalld service is not running; if firewalld is already running, use the equivalent firewall-cmd commands instead.
Procedure
Use one of the following commands to set the IP address range:
Configure the IP address range with default values by running the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
Configure the IP address range with custom values by running the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP range>
To allow internal traffic from pods through the network gateway, run the following command:
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
4.6.1. Applying firewall settings
To apply firewall settings, use the following one-step procedure:
Procedure
- After you have finished configuring network access through the firewall, run the following command to restart the firewall and apply the settings:
$ sudo firewall-cmd --reload
4.7. Verifying firewall settings
After you have restarted the firewall, you can verify your settings by listing them.
Procedure
To verify rules added in the default public zone, such as ports-related rules, run the following command:
$ sudo firewall-cmd --list-all
To verify rules added in the trusted zone, such as IP-range-related rules, run the following command:
$ sudo firewall-cmd --zone=trusted --list-all
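If you added the default IP ranges from this chapter, the trusted-zone listing looks similar to the following; the output is abbreviated here, and the exact fields vary by firewalld version:
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources: 10.42.0.0/16 169.254.169.1
  services:
  ports:
  ...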
4.8. Overview of firewall ports when a service is exposed
Firewalld is often active when you run services on MicroShift. This can disrupt certain services on MicroShift because traffic to the ports might be blocked by the firewall. You must ensure that the necessary firewall ports are open if you want certain services to be accessible from outside the host. There are several options for opening your ports:
Services of the NodePort and LoadBalancer type are automatically available with OVN-Kubernetes.
In these cases, OVN-Kubernetes adds iptables rules so that traffic to the node IP address is delivered to the relevant ports. This is done by using the PREROUTING rule chain; the traffic is then forwarded to OVN-Kubernetes, bypassing the firewalld rules for local host ports and services. In RHEL 9, iptables and firewalld are backed by nftables, and the nftables rules that iptables generates always take priority over the rules that firewalld generates.
Pods with HostPort parameter settings are automatically available. This includes the router-default pod, which uses ports 80 and 443.
For HostPort pods, the CRI-O configuration sets up iptables DNAT (Destination Network Address Translation) rules to the pod's IP address and port.
These methods function for clients whether they are on the same host or on a remote host. The iptables rules that OVN-Kubernetes and CRI-O add attach to the PREROUTING and OUTPUT chains. Local traffic goes through the OUTPUT chain with the interface set to the lo type. The DNAT runs before the traffic hits the filter rules in the INPUT chain.
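To inspect these rules on a running system, you can list the NAT table PREROUTING chain and the underlying nftables ruleset; with the iptables-nft backend on RHEL 9, both views show the same DNAT rules:
$ sudo iptables -t nat -L PREROUTING -n
$ sudo nft list ruleset | grep -i dnat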
Because the MicroShift API server does not run in CRI-O, it is subject to the firewall configurations. You can open port 6443 in the firewall to access the API server in your MicroShift cluster.
4.10. Known firewall issue
- To avoid breaking traffic flows with a firewall reload or restart, execute firewall commands before starting MicroShift. The CNI driver in MicroShift makes use of iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptables rules breaks traffic flows. If firewall commands have to be executed after MicroShift is running, manually restart the ovnkube-master pod in the openshift-ovn-kubernetes namespace to reset the rules controlled by the CNI driver, as shown in the example that follows.
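One way to restart the pod is to delete it so that its controller re-creates it. The app=ovnkube-master label selector shown here is an assumption that can differ between releases, so verify the labels first:
$ oc get pods -n openshift-ovn-kubernetes --show-labels
$ oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master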
Chapter 5. Configuring network settings for fully disconnected hosts
Learn how to apply networking customization and settings to run MicroShift on fully disconnected hosts. A disconnected host runs the Red Hat Enterprise Linux (RHEL) operating system, version 9.0 or later, whether physical or virtual, without network connectivity.
5.1. Preparing networking for fully disconnected hosts
Use the procedure that follows to start and run MicroShift clusters on devices running fully disconnected operating systems. A MicroShift host is considered fully disconnected if it has no external network connectivity.
Typically this means that the device does not have an attached network interface controller (NIC) to provide a subnet. These steps can also be completed on a host with a NIC that is removed after setup. You can also automate these steps on a host that does not have a NIC by using the %post phase of a Kickstart file.
Configuring networking settings for disconnected environments is necessary because MicroShift requires a network device to support cluster communication. To meet this requirement, you must configure MicroShift networking settings to use the "fake" IP address you assign to the system loopback device during setup.
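As a sketch of the Kickstart automation mentioned earlier, the following %post fragment mirrors the procedure later in this chapter. It assumes that nmcli and hostnamectl are usable in the %post environment and that 10.44.0.1 and 10.44.1.1 are appropriate addresses for your device; adapt all values before use:
%post
# Assign a "fake" IP address to the loopback device (example value)
IP="10.44.0.1"
nmcli con add type loopback con-name stable-microshift ifname lo ip4 ${IP}/32
nmcli conn modify stable-microshift ipv4.ignore-auto-dns yes
nmcli conn modify stable-microshift ipv4.dns "10.44.1.1"
# Map the persistent hostname to the fake IP address
NAME="$(hostnamectl hostname)"
echo "${IP} ${NAME}" >> /etc/hosts
# Point MicroShift at the loopback address and persistent hostname
cat > /etc/microshift/config.yaml <<EOF
node:
  hostnameOverride: ${NAME}
  nodeIP: ${IP}
EOF
%end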
5.1.1. Procedure summary
To run MicroShift on a disconnected host, the following steps are required:
- Prepare the host
- Stop MicroShift if it is currently running and clean up changes the service has made to the network.
- Set a persistent hostname.
- Add a “fake” IP address on the loopback interface.
- Configure DNS to use the fake IP as local name server.
- Add an entry for the hostname to /etc/hosts.
- Update the MicroShift configuration:
- Define the nodeIP parameter as the new loopback IP address.
- Set the node.hostnameOverride parameter to the persistent hostname.
- For the changes to take effect:
- Disable the default NIC if attached.
- Restart the host or device.
After it starts, MicroShift runs using the loopback device for intra-cluster communication.
5.2. Restoring MicroShift networking settings to default
You can remove networking customizations and return the network to default settings by stopping MicroShift and running a clean-up script.
Prerequisites
- RHEL 9 or newer.
- MicroShift 4.14 or newer.
- Access to the host CLI.
Procedure
Stop the MicroShift service by running the following command:
$ sudo systemctl stop microshift
Stop the kubepods.slice systemd unit by running the following command:
$ sudo systemctl stop kubepods.slice
MicroShift installs a helper script to undo network changes made by OVN-K. Run the cleanup script by entering the following command:
$ sudo /usr/bin/microshift-cleanup-data --ovn
5.3. Configuring the networking settings for fully disconnected hosts
To configure the networking settings for running MicroShift on a fully disconnected host, you must prepare the host, update the networking configuration, then restart to apply the new settings. All commands are executed from the host CLI.
Prerequisites
- RHEL 9 or newer.
- MicroShift 4.14 or newer.
- Access to the host CLI.
- A valid IP address chosen to avoid both internal and potential future external IP conflicts when running MicroShift.
- MicroShift networking settings are set to defaults.
The following procedure is for use cases in which access to the MicroShift cluster is not required after devices are deployed in the field. There is no remote cluster access after the network connection is removed.
Procedure
Add a fake IP address to the loopback interface by running the following command:
IP="10.44.0.1" sudo nmcli con add type loopback con-name stable-microshift ifname lo ip4 ${IP}/32$ IP="10.44.0.1"1 $ sudo nmcli con add type loopback con-name stable-microshift ifname lo ip4 ${IP}/32Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The fake IP address used in this example is “10.44.0.1”.
Note: Any valid IP address works if it avoids both internal MicroShift and potential future external IP conflicts. This can be any subnet that does not collide with the MicroShift node subnet and is not accessed by other services on the device.
Configure the DNS interface to use the local name server by modifying the connection settings to ignore automatic DNS and point to the local name server:
Bypass the automatic DNS by running the following command:
$ sudo nmcli conn modify stable-microshift ipv4.ignore-auto-dns yes
Point the DNS interface to use the local name server by running the following command:
$ sudo nmcli conn modify stable-microshift ipv4.dns "10.44.1.1"
Get the hostname of the device by running the following command:
NAME="$(hostnamectl hostname)"
$ NAME="$(hostnamectl hostname)"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add an entry for the hostname of the node in the
/etc/hostsfile by running the following command:echo "$IP $NAME" | sudo tee -a /etc/hosts >/dev/null
$ echo "$IP $NAME" | sudo tee -a /etc/hosts >/dev/nullCopy to Clipboard Copied! Toggle word wrap Toggle overflow Update the MicroShift configuration file by adding the following YAML snippet to
/etc/microshift/config.yaml:sudo tee /etc/microshift/config.yaml > /dev/null <<EOF node: hostnameOverride: $(echo $NAME) nodeIP: $(echo $IP) EOF
sudo tee /etc/microshift/config.yaml > /dev/null <<EOF node: hostnameOverride: $(echo $NAME) nodeIP: $(echo $IP) EOFCopy to Clipboard Copied! Toggle word wrap Toggle overflow MicroShift is now ready to use the loopback device for cluster communications. Finish preparing the device for offline use.
- If the device currently has a NIC attached, disconnect the device from the network.
- Shut down the device and disconnect the NIC.
- Restart the device for the offline configuration to take effect.
Restart the MicroShift host to apply the configuration changes by running the following command:
$ sudo systemctl reboot 1
- 1
- This step restarts the cluster. Wait for the greenboot health check to report the system as healthy before proceeding with verification.
Verification
At this point, network access to the MicroShift host has been severed. If you have access to the host terminal, you can use the host CLI to verify that the cluster has started in a stable state.
Verify that the MicroShift cluster is running by entering the following command:
$ export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
$ sudo -E oc get pods -A
Example output
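A healthy cluster reports all pods as Running or Completed. The listing resembles the following illustrative sketch; pod names, counts, and namespaces vary by MicroShift release and are not verbatim output:
NAMESPACE                  NAME                   READY   STATUS    RESTARTS   AGE
openshift-dns              dns-default-...        2/2     Running   0          2m
openshift-dns              node-resolver-...      1/1     Running   0          2m
openshift-ingress          router-default-...     1/1     Running   0          2m
openshift-ovn-kubernetes   ovnkube-master-...     4/4     Running   0          2m
openshift-ovn-kubernetes   ovnkube-node-...       1/1     Running   0          2m
openshift-service-ca       service-ca-...         1/1     Running   0          2m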